Hacker News
Self-driving Uber car kills Arizona woman crossing street (reuters.com)
2361 points by kgwgk on March 19, 2018 | 1766 comments



One aspect that comes from this is that car crashes can now be treated more like aircraft crashes. Each self-driving car has a black box in it with a ton of telemetry.

So it's no longer just "don't drink and drive", knowing that the driver will probably reoffend soon anyway. Every crash and especially fatality can be thoroughly investigated and should be prevented from ever happening again.

Hopefully there's enough data in the investigation so that Tesla / Waymo and all other car companies can include the circumstances of the failure in their tests.


Every lesson from aviation is earned in blood. This death wasn't necessary, though. The Otto/Uber guys have been informed about their cars' difficulty sensing and stopping for pedestrians. I know this because I informed them myself when one almost ran me down in a crosswalk in SF. You can't learn anything from your lessons unless you listen. Maybe they can pause and figure out how to actually listen to reports of unsafe vehicle behavior.


Police are reporting that it never slowed down, hitting the pedestrian at 40 mph.


It's interesting that the car was exceeding the speed limit of 35 mph. I would assume the car would stay at or below the speed limit. Who gets the speeding ticket in this case? Does 5 mph affect the reaction time such that it could have noticed and taken evasive action?


>>Who gets the speeding ticket in this case?

Whoever owns the algorithm. Or at least whoever's name the license/permission was issued in. If it's an organization, the top management who signed off on this has to take the blame.


Legally, the person behind the wheel was still the driver. They are responsible for both the speeding and for killing a pedestrian. At this stage it's no different than using cruise control - you are still responsible for what happens.


I really hope you're wrong. If the legal system doesn't distinguish between cruise control and SAE level 3 autonomy, the legal system needs to get its shit together.


IMO as long as there is a human in the driver’s seat who is expected to intervene, they should bear the consequences of failing to do so.


No, that's bullshit. It's physically impossible for a human to intervene on the timescales involved in motor accidents. Autonomy that requires an ever-vigilant driver to be ready to intervene at any second is literally worse than no autonomy at all; because if the driver isn't actively driving most of the time, their attention is guaranteed to stray.


I agree with you - but that's literally the stage we're at. What we have right now is like "advanced" cruise control - the person behind the wheel is still legally defined as the driver and bears responsibility for what happens. The law "allows" these systems on the road, but there is no framework out there which would shift the responsibility to anyone else but the person behind the wheel.

>> It's physically impossible for a human to intervene on the timescales involved in motor accidents.

That remains true even without any automatic driving tech - you are responsible even for accidents which happen too quickly for anyone to intervene. Obviously, if you have some evidence (dashcam) showing that you couldn't avoid the accident you should be found not guilty, but the person going to court will be you - not the maker of your car's cruise control/radar system/whatever.


I currently have two cars: a '14 Mazda3 with AEB, lane departure alert, radar cruise, BLIS, and rear cross-traffic alert, and an '11 Outback with none of that (but DSC and ABS, as well as AWD).

The assists are certainly helping more than anything else, so I feel that the Mazda is much safer to drive in heavy traffic than the older Outback.

The cruise only has autonomy over controlling the speed and applying the brakes, but it is still autonomy. Of course, since my hands never leave the wheel, it may not fit with what you have in mind.

Having said that, Mazda (or Bosch?) really nailed their radar, having never failed to pick up motorbike riders even though the manual warns us to not expect it to work.

I feel more confident in a system where the ambition is smaller, yet execution more solid.

FWIW, I also tested the AEB against cardboard boxes, driving at them at 30 km/h without touching the accelerator at all, and came away very impressed by the system. It intervened so late that I felt sure it wasn't going to work, but it did - the first time there was a very slight impact, the next two were complete stops with small margins.

This stuff is guaranteed to save lives and prevent costly crashes (I generally refuse to use the word "accident") on a grander scale.


The latest top-end Toyota RAV4s have that too. It's quite amazing how well they are able to hold cruising speed and maintain distance behind the car ahead.

I do love that even though they have a ton of driver alerting features, hands have to be on the wheel at all times.

Either you have full autonomy without hands or you don't. There is no middle ground; anything in between is a recipe for disaster.


Bullshit?? It may be autonomous, but these cars are still far from driverless. YOU get in the car, you know the limitations, you just said you consider yourself physically incapable of responding in time to motor accidents, and that the safety will be worse than a non-autonomous car's. Sounds to me like what's bullshit is your entitlement to step into an autonomous vehicle when you know it diminishes road safety. Autonomous vehicles can in theory become safer than human drivers; what is bullshit is that you want to drive them now, when they are strictly not yet safer than a human, but do so without consequences.


I attended an Intelligent Transport Systems (ITS) summit last year in Australia. The theme very much centred around Autonomous Cars and the legality, insurance/liabilities and enhancements.

There are several states in the USA that are more progressive than others (namely CA). But with many working groups in and around the legal side, hopefully the uncertainty will soon be a thing of the past.

In Australia, they are mandating by some year soon (don't have it on hand) that to achieve a 5-star safety rating, some level of automation needs to exist. Features such as lane departure warning or ABS will become as standard as aircon.


Assuming ABS means "Anti-Lock Braking System" in this context, isn't that already standard? I can't think of a (recent) car with an ANCAP rating of 5 that doesn't have ABS. I'm not sure I would even classify ABS as automation in the same way that something like lane departure is automation. ABS has been around (in some form) since the 1950s, and works by just adjusting braking based on the relative turning rates of each wheel. Compared to lane departure, ABS is more like a tire pressure sensor.


Generally ABS does mean anti-lock braking system, but my guess is that they meant "Automatic Braking System"?


It creates an incentive to buy autonomous cars that are well programmed.


Does this responsibility stay with the driver, despite this clearly being an Uber operation? Aside from the victim, did self-driving tech just get its first, uhm, "martyr"?


By law (and please correct me if I'm wrong), the driver of the vehicle is responsible for everything that happens with the vehicle. Why would it matter if the vehicle is owned by UPS, FedEx, Pizza Hut or Uber? Is a truck driver not responsible for an accident just because they drive for a larger corporation?

Let me put it this way - my Mercedes has an emergency stop feature when it detects pedestrians in front of the car. If I'm on cruise control and the car hits someone, could I possibly blame it on Mercedes? Of course not. I'm still the driver behind the wheel and those systems are meant to help - not replace my attention.

What we have now in these semi-autonomous vehicles is nothing more than a glorified cruise control - and I don't think the law treats it any differently (at least not yet).

Now, if Uber(or anyone else) builds cars with no driver at all - sure, we can start talking about shifting the responsibility to the corporation. But for now, the driver is behind the wheel for a reason.


From the article:

The San Francisco Chronicle late Monday reported that Tempe Police Chief Sylvia Moir said that from viewing videos taken from the vehicle “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway." (bit.ly/2IADRUF)

Moir told the Chronicle, “I suspect preliminarily it appears that the Uber would likely not be at fault in this accident,” but she did not rule out that charges could be filed against the operator in the Uber vehicle, the paper reported.


I would be interested in hearing more about how she qualified that statement. Are shadows a known limitation with some systems, or only Uber's?


The measured speed was 38mph. That is within 10% of the posted speed.


Driving rules in the UK changed, at least a decade ago, so that there is no 10% margin. Speedometers are required by law never to read under your true speed, and they are more reliable now. So if you're going 36 mph then you'd be fined.

On top of the speedometer, the car has the GPS speed to compare against as well; I can't see how there is any excuse for being over the limit.

The quoted stats from UK advertising were that at 40mph 80% of pedestrians will die from the crash, at 30mph 20% will die.

Had the car been doing just under the limit, e.g. 33 mph, there's a much better chance that the woman would have survived.
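A rough back-of-the-envelope check of that claim (the ~7 m/s^2 braking figure and the 15 m braking-onset distance are my assumptions, not numbers from the report):

    # Residual impact speed if hard braking starts a fixed distance from the pedestrian.
    # Constant-deceleration kinematics: v_impact^2 = v0^2 - 2*a*d.
    import math

    MPH_TO_MS = 0.44704
    DECEL = 7.0          # m/s^2, assumed hard braking on dry asphalt
    BRAKE_DIST = 15.0    # m, assumed distance at which braking begins (hypothetical)

    for mph in (38, 33, 30):
        v0 = mph * MPH_TO_MS
        v_sq = v0**2 - 2 * DECEL * BRAKE_DIST
        v_impact = math.sqrt(v_sq) if v_sq > 0 else 0.0
        print(f"{mph} mph -> impact at {v_impact / MPH_TO_MS:4.1f} mph")

With those assumptions, 38 mph still hits at roughly 20 mph, 33 mph hits at about 6 mph, and 30 mph stops short - near the edge of the stopping envelope, a few mph of initial speed makes exactly the kind of difference the quoted fatality figures describe.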


I cannot find a reference to back up your claim of the 10% + 2 mph margin having been axed. In fact, I remember the Chief Constable calling for the end of it recently (implying it was still being used):

http://www.dailymail.co.uk/news/article-5332443/Police-chief... [30 January 2018]

https://www.telegraph.co.uk/news/2018/01/31/motorists-should...

Can you explain why you think this rule changed a decade ago?


Just what my partner was told when she was caught speeding and was offered a course on how to avoid speeding instead of getting points on her licence.


That's not how posted speed limits work in Tempe though. Traffic flows an average of 5-10mph above the posted limit.


Isn't that still too fast? Maybe not worth ticketing for, but still relevant after an incident like this?


So when the road sign says 35mph it means the official speed limit is exactly 38.5mph?

Because sometimes that 10% is argued as a margin of error for humans supposedly not paying attention to how fast they're going, but if that's the case then there's really no reason why the robot shouldn't drive strictly under the speed limit.

If you explicitly programmed a fleet of robots to deliberately break the law, then I don't think it's enough of a consequence to just fine the first robot that gets caught breaking that law while the programmers adjust the fleet's code so it doesn't get caught again.

Consequences should be more severe if there's a whole fleet of robots programmed to break the law, even if the law catches the first robot right away and the rest of the fleet is paused immediately.


It should be noted that speedometers display a higher number than the actual speed. So if a cop flags a driver at 38.5 mph, there's a good chance their speedometer showed 40+ mph.


It's said she was hit immediately after entering a traffic lane outside a crosswalk. Quite possibly there was no time for the autopilot to react at all. I hope all video footage from self-driving crashes is mandatorily released.


From the other post on this:[0]

> Chief of Police Sylvia Moir told the San Francisco Chronicle on Monday that video footage taken from cameras equipped to the autonomous Volvo SUV potentially shift the blame to the victim herself, 49-year-old Elaine Herzberg, rather than the vehicle.

> “It’s very clear it would have been difficult to avoid this collision in any kind of mode [autonomous or human-driven] based on how she came from the shadows right into the roadway,” Moir told the paper, adding that the incident occurred roughly 100 yards from a crosswalk. “It is dangerous to cross roadways in the evening hour when well-illuminated managed crosswalks are available,” she said.

0) http://fortune.com/2018/03/19/uber-self-driving-car-crash/


Advocates for pedestrians and cyclists have pointed out that many investigations of car crashes with pedestrians and cyclists tend to blame the pedestrian/cyclist by reflex and generally refuse to search for exculpatory evidence.

Based on the layout of the presumed crash site (namely, the median had a paved section that would effectively make this an unmarked crosswalk), and based on the fact that the damage was all on the passenger's side (which is to say, the pedestrian would have had to have crossed most of the lane before being struck), I would expect that there is rather a lot that could have been done on the driver's side (whether human or autonomous) to avoid the crash.


Your passenger's side comment didn't make sense to me until I read the Fortune article linked above:

> Herzberg is said to have abruptly walked from a center median into a lane with traffic

So that explains that. However, contrary to the thrust of your argument, the experience of the sober driver, who was ready to intervene if needed, is hard to dismiss:

> “The driver said it was like a flash, the person walked out in front of them,” Moir said. “His first alert to the collision was the sound of the collision.”

And also:

> “It’s very clear it would have been difficult to avoid this collision in any kind of mode [autonomous or human-driven] based on how she came from the shadows right into the roadway,” Moir told the paper, adding that the incident occurred roughly 100 yards from a crosswalk. “It is dangerous to cross roadways in the evening hour when well-illuminated managed crosswalks are available,” she said.


Yes, I see. So she walked maybe 2 meters into the lane before being hit. At a slow walk (1 meter/second) that's 2 seconds. At the car's 17 meters/second, that's 34 meters of travel. And 2 seconds is about twice the nominal disengagement time. So yes, it's iffy.


And at a moderate sprint, which is how most adults cross a roadway with vehicular traffic, that's 4-5 m/s, giving the vehicle 0.4-0.5 seconds to stop. At 40 mph (~18 m/s), that gives the vehicle 7-9 meters in which to stop.

No human could brake that well, and simply jamming the brakes would engage the ABS leading to a longer stopping distance. Not to mention the human reaction time of 0.5-0.75 seconds would have prevented most people from even lifting their foot off the accelerator pedal before the collision, even if they were perfectly focused on driving.
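A minimal sketch of that time budget, taking the figures above as given (the 2 m lane-entry depth, reaction time, and deceleration are assumptions for illustration):

    # Distance the car covers while the pedestrian is an assumed 2 m into the lane,
    # versus the distance a human driver needs to react and brake to a stop.
    MPH_TO_MS = 0.44704

    car_speed = 40 * MPH_TO_MS   # ~17.9 m/s
    entry_depth = 2.0            # m walked into the lane before impact (assumed)
    reaction = 0.75              # s, typical human perception-reaction time
    decel = 7.0                  # m/s^2, assumed hard braking

    for ped_speed in (1.0, 4.5):                      # slow walk vs. sprint, m/s
        exposure = entry_depth / ped_speed            # s the pedestrian is in the lane
        travel = car_speed * exposure                 # m the car covers in that time
        stop = car_speed * reaction + car_speed**2 / (2 * decel)
        print(f"ped at {ped_speed} m/s: car covers {travel:5.1f} m, needs {stop:5.1f} m to stop")

At a sprint the car covers only ~8 m while the pedestrian is exposed, against ~36 m needed to react and stop; even at a slow walk the two numbers are roughly equal, which is why the outcome is so sensitive to when the pedestrian first becomes detectable.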


> simply jamming the brakes would engage the ABS leading to a longer stopping distance

I was taught that the entire point of ABS is so that you can just jam the brake and have the shortest stopping time instead of modulating it yourself to avoid skidding. Do you have any source to the contrary?


ABS is intended to enable steering by keeping the tires rolling (static friction) rather than sliding. It is not intended to decrease stopping distance, and in many cases increases stopping distance by keeping the negative G's away from the hard limit in anticipation of lateral G's due to steering.

Older dumb ABS systems would simply "pump the pedal" for the driver, and would increase stopping distance in almost all conditions, especially single-channel systems. Newer systems determine braking performance via the conventional ABS sensors plus accelerometers. These systems will back off N G's, then increase the G's, bisecting between the known-locked and known-unlocked conditions to find the optimum. These systems _will_ stop the car in the minimum distance possible, but very few cars use them.
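A toy sketch of that bisection idea (not any manufacturer's actual control loop; wheel_is_locked and set_brake_pressure are hypothetical stand-ins for the real sensor and actuator interfaces):

    def abs_bisect_step(lo, hi, wheel_is_locked, set_brake_pressure):
        """One iteration of a toy ABS controller.
        lo: highest brake pressure known not to lock the wheel
        hi: lowest brake pressure known to lock the wheel
        Returns the updated (lo, hi) bracket around the optimum."""
        mid = (lo + hi) / 2.0
        set_brake_pressure(mid)
        if wheel_is_locked():
            hi = mid      # too much pressure: tighten the upper bound
        else:
            lo = mid      # still rolling: we can brake harder
        return lo, hi

Real systems re-seed the bracket continuously as road friction changes; the point is just that the controller hunts for the pressure right below wheel lock instead of blindly pulsing the brakes.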


I was taught the point of ABS was to keep control over steering while stepping on the brakes instead of skidding out of control into god knows what/who

Wikipedia backs me up but adds that it also decreases stopping distance on dry and slippery surfaces, while significantly increasing stopping distances in snow and gravel. I’m from a country with a lot of snow so that makes sense.


That's correct. The ABS basically takes away the brake pressure as soon as the wheels lock. On most surfaces this will shorten your stopping distance versus a human locking the tires. It is never the optimal stopping distance, though.

In terms of split second reactions, it's pretty much optimal still to just jam the brakes if you have ABS. It's much better than braking too little, which is what most non-ABS drivers would have done.


When you block the wheels in loose snow or gravel, it piles up in front of the tire and provides a fair amount of friction. This is usually the fastest way to stop, and one of the reasons that gravel pits along corners in motor racing are so effective.

That said, the point of ABS is that in the rare event you have to brake at full power, the system automatically helps you do it near-optimally (with slight skidding) without additional input, and you retain full steering ability.

If you don't have ABS you'd need to train that emergency stop ability on a daily basis to even come close.


> Wikipedia backs me up but adds that it also decreases stopping distance on dry and slippery surfaces,

Many cars, such as my POS Ford Focus, use a single-channel ABS system. These systems will oscillate all four brakes even if only one is locked. Combined with the rear-wheel drum brakes, the ABS considerably increased stopping distances on dry road.


From my experience of walking my bicycle, you are slower than usual when doing so, and it's pretty difficult to abruptly change direction in that situation. I would be curious to know the FOV of the camera that recorded her.


Yes, I also assumed that. Back when I rode a lot, I don't recall sprinting across roadways with my bike. Also, from the photo, she had a heavy bike, with a front basket. And yes, they ought to release the car's video.


I don't think we have grounds to just assume it was a slow walk.


er... LIDAR needs ambient light now? Also if you look on Google Street View, the pedestrian entered the road from a median crossing that you can see straight down the middle of from the road hundreds of feet away. I bet they don't release the footage from the car though ;)


Sensor Fusion typically merges LiDAR with stereoscopic camera feeds, the latter requiring ambient light.


This is just victim blaming.


Isn't that an overly broad use of the term? I mean, if someone steps in front of a moving vehicle, from between parked vehicles, the driver may have only a few msec to react. Whose fault is it then?

Maybe it's society's fault, for building open-access roadways where vehicles exceed a few km/h.


I think you’re right about the street design being the main cause in this case. A street with people on it should be designed so that drivers naturally drive at slow, safe speeds. The intersection in question is designed for high speed. https://www.strongtowns.org/journal/2018/2/2/forgiving-desig...


I don't remember reading about parked vehicles. The accident location seems too narrow to park any vehicles.

As others have said in the comments, the whole point of having the technology is defeated if it performs worse than humans. Assuming vehicles were parked, a sane human driver would evaluate the possibility of someone suddenly coming out from between them and would not drive at 40 miles an hour.


>>a sane human driver would evaluate the possibility of someone suddenly coming out from between them and would not drive at 40 miles an hour.

If that's the case most drivers on the road are very far from "sane drivers." I've been illegally passed, on narrow residential streets, many times, because I was going a speed that took into account the fact someone may jump out between parked cars.


Do you want AI to simulate insanity?


"no time for the autopilot to react" - that may be technically true but humans tend to slow before if they recognize a situation they don't understand

http://fortune.com/2018/03/19/uber-self-driving-car-crash/

> she came from the shadows right into the roadway

also, we were told radar would have solved exactly this limitation of humans

> Uber car was driving at 38 mph in a 35 mph zone

also, we were told these cars would be inherently safer because they would always respect limits and signage

> she is said to have abruptly walked from a center median into a lane with traffic

I don't know about other drivers, but when someone is on the median or close to the road I usually slow down on principle, because it doesn't match the usual expectations of a typical 'safe' situation.

I've been advocating against public testing for a long time, because it just treats people's safety as an externality. Uber is cutting corners; not all companies are that sloppy, but this is, overall, unacceptable.


Isn't the point of smart vehicles that they have superhuman reaction speed?


E = ½mv², whether the driver is a superhuman robot or a human.

This means that there's a minimum distance within which even the optimal driver cannot stop a car doing x mph. Yes, an autonomous vehicle has a faster reaction time to begin the stop, but no matter the reaction time, a stop cannot be instantaneous from any substantial speed.

If it takes 20 feet to stop a car doing 20MPH, it will take 80 feet to stop a car doing 40mph. If there's a human between the initial brake point and 80 feet from it, that human will be hit, no matter who or what the driver is.
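The quadratic relationship is just d = v²/(2a), so the deceleration cancels out of the ratio; a two-line check of the 20 ft / 80 ft figures above:

    # Braking distance scales with the square of speed: d = v^2 / (2a).
    for v_mph in (20, 40):
        print(f"{v_mph} mph needs {(v_mph / 20) ** 2:.0f}x the braking distance of 20 mph")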


The promise of self-driving cars is (was) that they're much better than humans at predicting the behavior of other moving entities. A pedestrian doesn't suddenly materialize on the road in front of the car. It comes from somewhere, and the radar could have detected it (even "in the shadows") and slowed down in anticipation of the uncertainty.

Or maybe it couldn't, but then the whole "narrative" of the experiment is in serious jeopardy.
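A minimal sketch of the kind of anticipation being described: extrapolate a tracked object a couple of seconds ahead under a constant-velocity assumption and slow down if its predicted path enters the lane. Everything here (the coordinate convention, the horizon, the numbers) is illustrative, not any vendor's actual planner:

    def should_slow_down(lateral_pos, lateral_vel, lane_near_edge, lane_far_edge,
                         horizon=2.0, dt=0.1):
        """Constant-velocity prediction: True if an object at lateral_pos metres
        from the lane (negative = off the road), moving at lateral_vel m/s toward
        it, is predicted to be between the lane edges within `horizon` seconds.
        Longitudinal overlap with the car's path is ignored for brevity."""
        steps = int(horizon / dt) + 1
        for i in range(steps):
            y = lateral_pos + lateral_vel * (i * dt)
            if lane_near_edge <= y <= lane_far_edge:
                return True
        return False

    # A pedestrian 2 m from the lane edge, walking toward it at 1.2 m/s, is
    # flagged before they ever step into the lane.
    print(should_slow_down(-2.0, 1.2, 0.0, 3.5))   # True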


> whole "narrative" of the experiment is in serious jeopardy.

Not really. Self-driving cars are supposed to be better than the average human driver. That does not imply that they NEVER make mistakes.

I do not know the specifics of this case, but a general comment: if somebody is hiding behind a bush and (deliberately or by mistake) runs in front of the car, there is no way the car can anticipate that. There is no way to avoid accidents in 100% of cases.


We have some corners where old houses even intrude a bit on the road. When passing these corners you have to slow down so you can stop in case a child runs out from behind the corner. You can't just blame the victim if you are in control of your own speed.

I can think of many situations where I have avoided hitting pedestrians because of my awareness of the situation. E.g.: a pedestrian with earphones, looking at their phone, crossing against a red light just because the left-turning vehicle in the left lane stopped for a red arrow while I had green going straight - the pedestrian mostly hidden behind that car, just seen through its windows.

Or a pedestrian behind high snowbanks heading towards an ordinary pedestrian crossing, no lights, almost completely hidden by the snowbanks and by a bus parked at a bus stop 50 m from the crossing, on a 50 km/h road. Since I had already seen the pedestrian from far away, I knew someone would show up there by the time I arrived. On the other hand, I would never pass a bus like that at high speed - pedestrians like to just run across in front of the bus. And high snowbanks next to a crossing are a big red flag too.

I live in Sweden though, where pedestrians are supposed to be first-class citizens who have no armor.


When you are driving you should be prepared to stop. If you're turning into a street you cannot see into and you're going faster than you can stop from, you're not prepared to stop - you're just hoping that no one is there. This is, in Denmark at least, fully expected and enforced. It is not the same as driving along a street and having someone jump out in front of you.


I have now actually seen the video of the crash, and I agree that it most likely would have been hard for a human to avoid. What surprises me is that the LiDAR completely missed her: she didn't run, she didn't jump, she was slowly walking across the road. I can't say whether the light was too bad; a camera often looks much darker than what you see with the naked eye when you're not blinded by other lights. The driver was looking down at the instrument panel at the time of the crash - do they have some view there of what the car sees?

This looks like exactly the situation the self-driving cars are supposed to be able to avoid: a big object in the middle of the street. I expect the car to try to avoid this even though the bike didn't seem to have any reflectors. If the LiDAR doesn't catch this, I don't think they should be out in traffic at all.


> We have some corners where old houses even intrude a bit on the road. When passing these corners you have to slow down so you can stop in case a child runs out from behind the corner. You can't just blame the victim if you are in control of your own speed.

Yes, but this is a 4-lane roadway. I can totally imagine driving cautiously and slowing down near residential areas where houses are close to the road. However, this seems like a different case.


> to begin the stop

It, or the driver, could do more than just stop though. You can change directions, even at 38mph.

Then we have to get into other questions, would I as a driver willingly sideswipe a car next to me to avoid hitting a pedestrian? Is it reasonable to expect an AI to make the same value decision?


It's not unknown for people to crash and burn to avoid hitting squirrels. And with modern airbag systems, it's arguably safer for all concerned for cars to risk hitting poles, and even trees. But on the other hand, once leaving the roadway there's the risk of hitting other pedestrians.


This is a major ethical decision to make. What if the airbags don't deploy? What if there are other unforeseen consequences of crashing one's own car to save somebody else's life? I honestly believe that, given a split-second reaction time, any decision made by a human should be considered right.

An algorithm, however, is a different deal: what should happen has already been decided in advance, so in some way it is already settled who gets killed.


When driving on surface streets, I do my best to track what's happening everywhere in front, not just on the roadway. Given all the sensors on a self-driving car, why can't it detect all moving objects, including those off the roadway but approaching?


It's all about what is hiding behind an occlusion. Blind spots are one thing, but if you jump right in front of a car out of nowhere, from a place totally invisible to the sensors, it's a totally different case.

You can't avoid what is hidden.


Yes, of course. But that isn't what happened here, right? A woman and bicycle on a median should have been quite obvious. I don't even see substantial landscaping on the median.[0]

0) https://www.nytimes.com/2018/03/19/technology/uber-driverles...


It appears that's pretty close to what happened here.

http://fortune.com/2018/03/19/uber-self-driving-car-crash/


We can try. In theory, each self-driving vehicle doesn't have to drive in isolation; they can be networked together and take advantage of each other's sensors and other sensors permanently installed as part of the road infrastructure.

That would increase the chance a particular vehicle could avoid an accident which it couldn't, on its own, anticipate.
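At its simplest, that sharing could look like each vehicle (or roadside unit) broadcasting the obstacles it sees with a timestamp and confidence, and every receiver merging fresh detections into its own world model. The message fields and merge rule below are invented for illustration; real V2X stacks define their own formats:

    from dataclasses import dataclass

    @dataclass
    class SharedObstacle:
        sender_id: str      # vehicle or roadside unit that made the detection
        x: float            # position in a shared map frame, metres
        y: float
        kind: str           # "pedestrian", "cyclist", "vehicle", "unknown"
        confidence: float   # 0..1
        timestamp: float    # seconds on a shared clock

    def merge(own_obstacles, received, now, max_age=0.5, min_conf=0.3):
        """Fold fresh, reasonably confident remote detections into the local list."""
        fresh = [o for o in received
                 if now - o.timestamp <= max_age and o.confidence >= min_conf]
        return own_obstacles + fresh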


Also the fact that most people now carry tracking devices. And that more and more, there are cameras everywhere. So there's potential for self-driving vehicles to know where everyone is.

It would be much safer if all roadways with speed limits over a few km/h were fenced, with tunnels or bridges for pedestrian crossing. Arguably, we would have that now, but for legislative efforts by vehicle manufacturers many decades ago. Maybe we'll get there with The Boring Company.


TL;DR: Nope.

"Most people" (which is, in reality, "most of my geek friends with high disposable income") shifts to "everyone" by the end of sentence. Also, my devices seem to know where I am...within a few city blocks: I do not like your requirement of always-on mandatory tracking, both from privacy and battery life POVs.

Even worse, this has major false negatives: it's not a probation tracker device - if I leave it at home, am I now fair game for AVs? And even if I have it with me and fine-grained position is requested, rarely do I get beyond "close to the corner of X and Y Street"; usually the precision tops out at tens of feet: worse than useless for real-time traffic detection.

Moreover, your proposal for car-only roadways is only reasonable for high-speed, separated ways; I sure hope you are not proposing fencing off streets (as would be the case here: 50 km/h > "a few km/h", this is the usual city speed limit).


OK, it was a dumb idea. Mostly dark humor. Reflecting my concerns about smartphones etc tracking location. But I do see that GPS accuracy for smartphones is about 5 meters,[0] which is close to being useful to alert vehicles to be cautious. And yes, it wouldn't protect everyone, and would probably cause too many false positives. And it's a privacy nightmare.

Some do argue that speed limits for unfenced roadways within cities ought to be 30 km/h or less. And although fatalities are uncommon at 30 km/h, severe injuries aren't. I live in a community where the speed limit is half that. But there's no easy solution, given how prevalent motor vehicles have become. Except perhaps self-driving ones.

0) https://www.gps.gov/systems/gps/performance/accuracy/#how-ac...

Edit: spelling


Tempe police have said that the car didn't brake ("significantly") prior to the collision[1]. So it does not look like a case of the car reacting optimally but simply carrying too much speed to halt in time.

[1]http://www.phoenixnewtimes.com/news/cops-uber-self-driving-c...


I haven't any insight into whether or not the car attempted to brake, but it's necessary to respond to the "superhuman reaction speed" remark: even if the reaction speed is superhuman, that doesn't necessarily mean it's enough.

Accidents can still occur even if the car is perfect.


There are many instances in Asia of people committing suicide by car (jumping in front of a car with no chance for the driver to stop).

Not saying this is the case here, but it could be. As others have said, we need to wait until we know more before jumping to conclusions.


Actually, you can't go from 40 mph to 0 mph in an instant, at least not with a passenger car. The stopping time is typically a few seconds. If someone throws themselves in front of a car, the car would need a few seconds to stop. Based on the speed, distance and other parameters, I don't think any car would be able to stop in an instant.

Thinking about it seriously, maybe it shouldn't. Braking that hard could also lead to a pile-up with other vehicles coming in from behind.


Superhuman reaction speed does not mean that the laws of physics stop applying. A vehicle at 40 mph will still take the same distance to stop.


It is, but actuators still have their lag - probably not much different from the foot-pedal combo.


The speed limit on that street is 45mph. The 35mph speed limit being quoted is in the other direction.


This comment section should be read in front of Congress the next time regulating the tech industry is on the table. These people literally think it's OK to perform experiments that kill people.


Although it's comforting that this exact situation shouldn't happen again in an Uber autonomous car... there is no mechanism to share that learning with the other car companies. There seriously fucking needs to be a consortium for exactly this purpose: sharing system failures.

Also, my problem with this is that a human death is functionally treated as finding an edge case that was missing a unit test and improving the code's test coverage... and that really bothers me somehow. We need to avoid treating deaths as progress in the pursuit of better things


> We need to avoid treating deaths as progress in the pursuit of better things

Au contraire. Go read building codes some time. There's a saying that they're "written in blood" - every bit, no matter how obvious or arbitrary seeming, was earned through some real-world failure.

The death itself isn't progress, of course. But we owe it to the person who died to learn from what happened.


The Federal Aviation Administration's Joint Order 7110.65 is exactly what you are talking about. It is the manual that air traffic controllers live by, covering situations such as diverging aircraft or taxiing instructions and how to handle them. The entire manual was practically written from real-world experience.


This reminds me of the story behind the iron ring and the collapse of the second narrows bridge in Vancouver, BC.

http://www2.ensc.sfu.ca/undergrad/euss/enscquire/Vol10No2/pa...


It used to be the case that those “codes” were also written in the blood of the builders – as per the Code of Hammurabi.


You might enjoy _The History of Fire Escapes: How to Fail_, a talk Tanya Reilly gave at a DevOps Days conference earlier this year.

https://www.youtube.com/watch?v=02KEKtc-5Dc


Not just building codes, but a lot of car safety regulations and practices too. Stuff like crumple zones, ABS, breakaway road signs, safety glass, etc.


You seriously think Uber wouldn't try to claim "commercial in confidence" or "trade secrets" rights to all the data from every single death?


Clearly I think Uber is a benevolent entity that has all of our best interests at heart. Also, I eat babies.

Or, you know, don't jump on comments for their not explicitly addressing the hobby horse you're riding. Frankly I just wanted to express a better engineering context around the loss of life without getting into the political bullshit for once.


>There seriously fucking needs to be a consortium for exactly this purpose: sharing system failures.

This implies they're using the same systems and underlying models. If one model hit a pedestrian because of a weakness in the training data plus a sub-optimal hyperparameter, and therefore classified a pedestrian in that specific pose as some trash on the street, how do you share that conclusion with other companies' models?


I guess it depends on the data available, but I don't think the hyperparameters are what's needed. You just need to see what the car sensed and how it responded. Then the other car companies can try to replicate similar circumstances and see what their models would do.


> You just need to see what the car sensed

I don't think all self-driving cars have the same sensors.

If you have LIDAR model X, and they were using LIDAR model Y, will your system "magically figure it out"?

If your car has cameras at 5ft high, and the data is from a camera 6ft high?

Sure, someone could release the data, but will it screw up your models more than it fixes them?

(I totally agree the data should be released, I'm just not sure other self-driving cars will directly benefit. Certainly they can indirectly benefit from it.)


Is the design of self-driving cars so limited that we will have to go through this every time carmakers redesign hardware and/or the vehicles themselves? Will the experience of a sedan be impossible to transfer to a semi truck? And vice versa?

The idea you present is possible, but I have to wonder how viable it makes the idea of self driving cars.


When self-driving cars become ubiquitous it will be through a rental model, and therefore the fleet can be updated accordingly by the companies owning them.


I understand, but the error may be in how it interpreted what it sensed. This is callous language, but if the models interpreted the pedestrian as trash on the street, then how it responded (driving over it) is not inappropriate.


> I understand, but the error may be in how it interpreted what it sensed.

It may, it also might not. If sharing data fails, further methods would be needed - but it's a good start for figuring out what data should be recorded for comparison. If the data is entirely incompatible, then we should have regulation to require companies to at minimum transcribe the data into a consumable format after an event such as this.

> ... if the models interpreted the pedestrian as trash on the street, then how it responded (driving over it) is not inappropriate.

If the models saw anything at all, they should not have driven over it. Even human drivers are taught that driving over road debris is dangerous. At minimum it puts the car/occupants at risk - in extreme cases, the driver may not recognize what it is they are driving over.

If this isn't a case where the car was physically unable to stop - it's more likely the telemetry didn't identify the person as an obstacle to avoid at all.


The idea is to come up with a certification test suite, and the cars have to pass it. Add/modify the tests as experience requires.


I would expect that nowadays you could fairly trivially produce near-lifelike footage and let self-driving cars 'play a video game' that tests tens of thousands of situations as a prerequisite for certification.

The great thing about computer-generated test cases is that you don't need real footage of every hypothetical awful thing that could happen, e.g. a truck losing control and rolling over sideways. These could be a stage 1 test -- a prerequisite to real testing. Like the hazards test before you are allowed to sit the driving test with a real examiner [1], to make sure you're not a hazard yourself.

[1]: https://www.vicroads.vic.gov.au/~/media/images/licences/hpt_...


That may lead to algorithms being optimized for the “video game” instead of real life.


Next step is you put the system on a test track where you throw a wide variety of cases at it.


What happens if it uses lidar (or any other sensor)? I don’t think it’s easy.


Airplane certification uses a combination of simulator exercises and real world flying through prescribed maneuvers.


Mumble mumble Dieselgate.


I think there is likely a common need for a set of standard training models. This may hinder innovation for some time, but it's a cost we should accept when releasing a potentially dangerous technology to the public. It would have the added benefit of multiple companies contributing to a common self-driving standard which could accelerate its development.

That being said, when the first real cars were introduced to the world and later improved upon there were many more fatalities than we're likely to experience with self-driving technology.


As commented in another thread here [0], HAL at Duke University [1] is putting forward proposals for this.

[0]: https://news.ycombinator.com/item?id=16620439

[1]: https://hal.pratt.duke.edu/research


I'm sorry, but this is silly.

Every technology is dangerous. Every technology costs lives to some extent when spread across billions of people. I'm sure forks take more lives each year than self driving cars.

Weigh this against the potential lives saved. I posit you'd be killing more people with drunk drivers by slowing innovation than you'd be saving via luddism.


The first real cars, dangerous as they were, were far safer than horses.


Just sensor data to replay the scenario as fully as possible should be sufficient. Whatever it looks like to humans, there's clearly something in there that can be difficult for AI systems and so everyone should be using it as a regression test.


They need to share situations that made systems fail, and ensure that it doesn't happen with their specific system.


Well, for one thing, I think I have misgivings in part because Uber hasn't really demonstrated an attitude that makes me think they'll be very careful to reduce accidents. (Also, the extent to which they've bet the farm on autonomous driving presents a lot of temptation to cut corners)


Circumstances basically force them into betting the farm. Whether they want SDCs or not, if someone scoops them on SDCs, it's an existential threat to their business.


Sure, and also paying the drivers is a big part of the reason they aren't making money yet. Nevertheless, I don't think it changes the fact that there is a big temptation for someone to start cutting corners if it's not moving along fast enough.


Oh yeah, I'm not disagreeing with you. I think it actually makes the temptation much worse. They've built this business with billions in revenue, and they will probably lose it all if this SDC doesn't succeed.

What's more, a lot of people seem to think Waymo's tech is further along. So not only does this project have to succeed, it is also the underdog. So no wonder they're aggressive.


It makes me uncomfortable too. I think it's because it's a real world trolley problem.

We all make similar decisions all our lives, but nearly always at some remove. (With rare exceptions in fields like medicine and military operations.) But autonomous vehicles are a very visceral and direct implementation. The difference between the trolley problem and autonomous vehicles is in the time delay and the amount of effort and skill required in execution.

Plus, we're taking what is pretty clearly a moral decision and putting it into a system that doesn't do moral decision-making.


That raises an interesting question: Should there be the equivalent of the FAA for self-driving cars? (Perhaps this could be a function of DoT.)


Although the FAA is of course involved, it's generally the NTSB that is the lead investigator in airplane crashes. The NTSB already has jurisdiction to investigate highway accidents.

There's also NHTSA (which is indeed part of the DoT).

It looks to me like we don't need any new agency at all, just a very small expansion of NHTSA's mandate to specifically address the "self-driving" part of self-driving cars.


Anyone know how this is handled between Boeing and Airbus? Can we mandate the same mechanism between Uber and Waymo?


Thinking about different scenarios as unit tests, it shouldn't be hard for them to simulate all sorts of different scenarios and share those tests. Perhaps that would become part of a new standard for safety measures, in addition to crash tests with dummies. In fact, I really think this will become the norm in the near future. It might even be out there already in some form.


> We need to avoid treating deaths as progress in the pursuit of better things

Then by all means let's stay at home and avoid putting humans in rockets ever again, because if you think space exploration will be done without deaths, you are in for a surprise.


Notoriously criminal company killing pedestrians with robo-cars to make money for themselves != Space exploration by volunteer scientists and service members


I hope we see the day when every car crash makes national news and there’s a big NTSB investigation into what happened.


This is by far the most insightful comment in the entire thread.

The real news isn't "Self-Driving Car Kills Pedestrian", the real news is "Self-Driving Car Fatalities are Rare Enough to be World News". I'm one-third of a class M planet away from the incident and reading about it.


> The real news isn't "Self-Driving Car Kills Pedestrian", the real news is "Self-Driving Car Fatalities are Rare Enough to be World News".

They are rare only because self-driving cars are; I don't think the total driven miles of all self-driving cars are enough that even 1 fatality would be expected if they were human driven; certainly Uber is orders of magnitude below that point taken alone.

There are lots of fatalities from human-driven cars, sure, but that's over a truly stupendous number of miles driven.


Neither of us has the data, but I'd bet that whatever the miles/fatalities metric is, the self-driving cars are still in the lead right now.


> Neither of us has the data, but I'd bet that whatever the miles/fatalities metric is, the self-driving cars are still in the lead right now.

There are different estimates from different sources using slightly different methodologies, but they are all in the neighborhood of 1 road fatality per 100 million miles traveled. [0]

Waymo claims to have reached 5 million miles in February [1]; Uber (from other posts in this thread) is around 1 million miles; the whole self-driving industry is nowhere near 100 million, and has one fatality. So it's way worse, as of today, than human driving for fatalities.

Of course, it's also way too little data (on the self-driving side) to treat as meaningful in terms of predicting the current hazard rather than simply measuring the outcomes to date.

[0] see, e.g., https://en.m.wikipedia.org/wiki/Transportation_safety_in_the...

[1] https://waymo.com/ontheroad/
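The arithmetic behind that comparison, using the figures quoted above (the mileage totals are obviously rough):

    human_rate = 1 / 100e6        # fatalities per mile, rough US baseline
    sdc_miles = 5e6 + 2e6         # Waymo ~5M plus Uber ~1-2M, order of magnitude
    expected_if_human = human_rate * sdc_miles
    print(f"expected at the human rate: {expected_if_human:.2f}")   # ~0.07
    print("observed: 1")          # roughly an order of magnitude worse, but n=1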


Uber reached 2M miles in November. Going from 1 to 2M in only 100 days.


wow, ok. i was very wrong! thank you for explaining!


> This is by far the most insightful comment in the entire thread.

lol


The NTSB is investigating this incident, including the electronic recorders (https://www.ntsb.gov/investigations/Pages/HWY18FH010.aspx), and it has another investigation from 2017 still to report (https://www.ntsb.gov/investigations/Pages/HWY18FH001.aspx).


With Uber's reputation, I wouldn't be surprised if they try to write an app to falsify black box telemetry in the event of a crash to put the liability on the victim. Maybe they'll call it "grey box" or "black mirror".

Does the NTSB have regulation on how black boxes are allowed to function?


> "Every crash and especially fatality can be thoroughly investigated"

Would be better if the code and data was open for public review.


That's a knee-jerk reaction that may open a can of worms. Do you need the personal details of the victim as well as the driver? Say, whether the victim had attempted suicide before? At the same crossing? Or whether the driver had a history of depression? Would that be a violation of their privacy? Would that cause a witch hunt?


Code and data, not people's personal details.


Sure. But if the result is inconclusive, do you leave it at that or demand to know more? What exactly makes releasing the data to the public better than entrusting it to a competent organization such as the NTSB?


Yes, you leave it at that, why would personal details be relevant?

Making it public means more eyes on the data, which can lead to a better understanding of what went wrong.


https://news.ycombinator.com/item?id=16547215

This comment by Animats in the context of self-driving trucks is quite telling. He warns precisely of this danger.


I'm hijacking my own post, but this is a very relevant MIT lecture on 'Technology, Policy and Vehicle Safety in the Age of AI' [0].

[0]: https://www.youtube.com/watch?v=LDprUza7yT4


> and should be prevented from ever happening again.

Take a look at the cost of airplanes, even small ones.


Black box with a ton of telemetry being piped into black box models though.


To ensure that all automotive software incorporates lessons learned from such fatalities, it would be beneficial to develop a common data set of (mostly synthetic) data replicating accident and 'near miss' scenarios.

As we understand more about the risks associated with autonomous driving, we should expand and enrich this data set, and, to ensure public safety, testing against it should be part of NHTSA / Euro NCAP testing.

I.e. NHTSA and Euro NCAP should start getting into the business of software testing.
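A sketch of what one record in such a shared scenario set might look like; the field names here are made up for illustration, not an existing NHTSA or Euro NCAP format:

    # One synthetic scenario record in a shared regression set (illustrative schema).
    scenario = {
        "id": "ped-midblock-night-001",
        "description": "Pedestrian walking a bicycle crosses mid-block from the median, low light",
        "road": {"lanes": 4, "speed_limit_mph": 45, "lighting": "street lamps, patchy"},
        "actors": [
            {"type": "pedestrian_with_bicycle", "start_m": [-4.0, 0.0],
             "speed_mps": 1.2, "heading_deg": 90},
        ],
        "ego_initial_speed_mph": 38,
        "pass_criteria": {"collisions": 0, "min_clearance_m": 1.0},
    }

A certification run would replay each record in simulation against the vendor's full stack and require the pass criteria to hold.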


Dr. Mary Cummings has been talking to NHTSA for a few years now about implementing V&V (validation and verification) for autopilots/AI in unmanned vehicles. She's also been compiling a dataset exactly like what you are talking about.

I think the idea is to build a "Traincar of Reasonability" to test future autonomous vehicles with.

You might want to check out her research https://hal.pratt.duke.edu/research


Thank you for that link. I will pass it along to my former colleagues who I suspect will be very interested in her work.


No problem!

I'm sure Dr. Cummings would be more than happy to talk about issues facing validation and verification in the context of NHTSA/FAA.


Why would Uber agree to such regulations?

They were unwilling to legally obtain a self-driving license in California because they did not want to report "disengagements" (situations in which a human driver has to intervene).

Uber would just set their self-driving cars free and eat whatever fine/punishment comes with it.


> Why would Uber agree to such regulations?

This is a strange question to ask. The regulation is not there to benefit Uber, it is to benefit public good. Very few companies would follow regulation if it was a choice. The setup of such regulation would be for it to be criminal to not comply. And if Uber could not operate in California (or the USA) if they did not comply, it would in their interest to provide the requested information.


Uber has shown very often that they are willing to break the law. It seems within their modus operandi to just ignore these rules.

Essentially, Uber engages in regulatory arbitrage but taking into account the cost-benefits of breaking the law. I.e. if it breaks the law but is profitable for them, they seem to do it.


Sure, so make the regulation expensive. For example, if a company is not in compliance then the executive team can be charged for any crime their self-driving toy committed under their guidance.


I don't believe this will be effective. Thinking back to the VW scandal, did any executive get punished for this? Same question for the Equifax breach and the insider trading issue.

My 'money' is on people with money figuring out loopholes, like plausible deniability.


Yes, it means that we need to write new regulations with real teeth, and vote out the politicians on all sides of the aisle that continue to punt on this issue.

One of my biggest complaints about the self-driving car space is that real lives are at stake; light-touch "voluntary" rules suitable for companies that publish solitaire clones aren't going to cut it here.


> Why would Uber agree to such regulations?

Uber doesn't get to pick and choose what regulations they wish to follow.

> Uber would just set their self-driving cars free and eat whatever fine/punishment comes with it.

That sounds quite negligent and a cause for heightened repercussions if anything happens.

The strange attitude you display is the _reason_ there are regulations.


Very cynical, but if your self-driving tech is way behind your competitors', wouldn't it help to have your lousy car in an accident, so that your competitors get hit with over-regulation and you thus kill a market on which you can't compete?


I‘m quite sure this would backfire A LOT in terms of brand damage. Uber in a sense made history today and now has actual blood on their hands. And if such a strategy should EVER leak (Dieselgate anyone?), people are going to prison.


GM killed 124 people with faulty ignition switches[1], yet the brand still survived. It's a cost calculation: will the brand damage outweigh the benefit to the company? Sadly, human lives don't factor into that equation.

[1] http://money.cnn.com/2015/12/10/news/companies/gm-recall-ign...


Sadly, that’s a common occurrence with big automakers.

I can’t say anything about GM‘s rep in the US, but here in Europe they are not doing so well. Chevrolet was killed in 2015, and Vauxhall/Opel are doing only ok-ish. Chevy had SO many recalls in the years before they killed it.


Opel was bought back into European ownership by PSA, the owner of Citroën and Peugeot, in 2017, so they have a chance to turn it around.


No, because what you risk is associating your brand with death rather than with autonomous driving.

Uber already has a terrible reputation with everyone in the tech industry for the sexism, bullying, law breaking, and IP theft. Do they really want to be the self-driving car company with a reputation for killing people?

It doesn't take a lot for people to think "Maybe I'll take a lyft" or "Maybe I'll ban uber from London because of their safety record"[0].

They aren't going to kill the market for this - the other players not only have big incentives to make sure they look safe, but you've got a really unique problem when your biggest competitor is a company that controls access to news and which adverts your customers will see.

[0] https://www.theguardian.com/politics/2017/oct/01/sadiq-khan-...


It's a cynical approach, but they could be playing both angles. Take enough risks that maybe you do succeed and you can cut your costs enormously by actually having SDCs. But if you fail, you also protect yourself by taking the competition down with you.


Uber ultimately backed down and applied for a California permit: http://fortune.com/2017/03/08/uber-permit-self-driving-cars-...


Then start tossing executives in jail.

It’s the only real solution to corporations misbehaving.


Corporations will just start paying compensation for executive jail time - and replace executives at an accelerated rate.

The only working regulation is one that is an existential threat to the company - which means huge financial punishments.


Something I'm surprised no one has considered seriously is revoking business licenses. That is a much more existential threat, literally.


Corporate death penalty. It's the only way to be sure.


Phoenixing :(


...from orbit.


Maybe they start paying compensation to the family, making sure that they're set even if you fail - firmly stepping into mafia territory.

Still, corporations, while being essentially a different kind of life, are not entirely separate entities - they are composed of people. Fear of jail might just be enough to get some high-level executives to exert proper influence on the direction of the corporation.


You can't get back a decade spent sitting in a miserable cell.


Au contraire - if you're facing a decade sitting unemployed in a room, and you get a chance, by taking on all the responsibilities and liabilities, to live your golden years in the sun, what would you have to lose?


I guess my hopeful answer would be they start getting treated like Enron and someone high up goes to prison until they start to comply.


Did anyone actually go to prison from Enron?


You mean, besides the CEO, Jeffrey Skilling?

https://www.reuters.com/article/us-enron-skilling/former-enr...

Enron's founder, Kenneth Lay, was also convicted and faced as much as life in prison. However, he died before sentencing: http://www.nytimes.com/2006/07/05/business/05cnd-lay.html


Yes, their CEO is still in prison: https://en.wikipedia.org/wiki/Jeffrey_Skilling .


It would be required by law. Violating the law would hold corporate officers criminally responsible.


Possibly there's enough business risk that if Uber doesn't, someone else will, and then they will have SDCs but Uber won't, and then Uber will go bankrupt just about instantly.


There may eventually be standard test suites that can be applied to any of the self-driving systems in simulation. This would give us a basis of comparison for safety, but also for speed and efficiency.

As well as some core set of tests that define minimum competence, these tests could include sensor failure, equipment failure (tire blowout, the gas pedal gets stuck, the brakes stop working) and unexpected environmental changes (ice on the road, a swerving bus).

Manufacturers could even let the public develop and run their own test cases.
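On the vendor side, such a suite could be run as ordinary unit tests around a simulator; simulate() and the scenario names below are placeholders, not a real API:

    import unittest

    def simulate(scenario, fault=None):
        """Placeholder: run the full driving stack against a simulated scenario,
        optionally injecting a fault, and return an object with outcome fields."""
        raise NotImplementedError

    class MinimumCompetence(unittest.TestCase):
        def test_pedestrian_midblock_at_night(self):
            result = simulate("ped-midblock-night-001")
            self.assertEqual(result.collisions, 0)

        def test_front_tire_blowout_at_speed(self):
            result = simulate("highway-cruise-65mph", fault="front_left_tire_blowout")
            self.assertEqual(result.collisions, 0)
            self.assertTrue(result.stopped_safely)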


How about not testing it on the unpaid public in incremental patches like this age of software "engineering" has decided it was a good idea to do?


You ultimately have to at some stage, since any test track is a biased test by its nature.

It is more an issue of how sophisticated these vehicles should be before they're let loose on public roads. At some stage they have to be allowed onto public roads or they'd literally never make it into production.


"Never making it into production" sounds like the perfect outcome for this technology.


Then make the officers of the company stake their lives on this, not the lives of innocent pedestrians.

If they're not willing to dogfood their own potential murder machines then why should the public trust them on the public roads?


This is what's going to happen. If you've ever seen a machine learning algorithm in action, this isn't surprising at all. Basically, they'll behave as expected some well known percentage of the time. But when they don't, the result will not be just a slight deviation from the normal algorithm, but a very unexpected one.

So we will have overall a much smaller number of deaths caused by self driving cars, but ones that do happen will be completely unexpected and scary and shitty. You can't really get away from this without putting these cars on rails.

Moreover, the human brain won't like processing these freak accidents. People die in car crashes every damn day. But we have become really accustomed to rationalizing that: "they were struck by a drunk driver", "they were texting", "they didn't see the red light", etc. These are "normal" reasons for bad accidents and we can not only rationalize them, but also rationalize how it wouldn't happen to us: "I don't drive near colleges where young kids are likely to drive drunk", "I don't text (much) while I drive", "I pay attention".

But these algorithms will not fail like that. Each accident will be unique and weird and scary. I won't be surprised if someone at some point wears a stripy outfit, and the car thinks they are a part of the road, and tries to explicitly chase them down until they are under the wheels. Or if the car suddenly decides that the road continues at a 90 degree angle off a bridge. Or that the splashes from a puddle in front is actually an oncoming car and it must swerve into the school kids crossing the perpendicular road. It'll always be tragic, unpredictable and one-off.


Very little of what goes into a current-generation self-driving car is based on machine learning [1]. The reason is exactly your point -- algorithmic approaches to self-driving are much safer and more predictable than machine learning algorithms.

Instead, LIDAR should exactly identify potential obstacles to the self-driving car on the road. The extent to which machine learning is used is to classify whether each obstacle is a pedestrian, bicyclist, another car, or something else. By doing so, the self-driving car can improve its ability to plan, e.g., if it predicts that an obstacle is a pedestrian, it can plan for the event that the pedestrian is considering crossing the road, and can reduce speed accordingly.

However, the only purpose of this reliance on the machine learning classification should be to improve the comfort of the drive (e.g., avoid abrupt braking). I believe we can reasonably expect that within reason, the self-driving car nevertheless maintains an absolute safety guarantee (i.e., it doesn't run into an obstacle). I say "within reason", because of course if a person jumps in front of a fast moving car, there is no way the car can react. I think it is highly unlikely that this is what happened in the accident -- pedestrians typically exercise reasonable precautions when crossing the road.

[1] https://www.cs.cmu.edu/~zkolter/pubs/levinson-iv2011.pdf
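
A minimal sketch of the division of labour described above, assuming a made-up Obstacle record and made-up thresholds: geometry from LIDAR decides whether to brake at all (the safety layer), while the classifier's label only tunes how early and how gently that happens (the comfort layer). This is not any company's actual planner, just an illustration of the idea.

    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        distance_m: float         # along our planned path, from LIDAR
        closing_speed_mps: float  # positive if we are converging on it
        label: str                # "pedestrian", "cyclist", "vehicle", "unknown" (from the classifier)

    # Extra margin per class, purely for ride comfort (values invented).
    COMFORT_MARGIN_M = {"pedestrian": 15.0, "cyclist": 10.0, "vehicle": 5.0, "unknown": 15.0}

    def choose_speed(current_speed_mps, obstacles, max_decel=6.0):
        target = current_speed_mps
        stopping_distance = current_speed_mps ** 2 / (2 * max_decel)
        for ob in obstacles:
            # Safety layer: pure geometry, ignores the label entirely.
            if ob.distance_m <= stopping_distance + 2.0:
                return 0.0                                 # brake, whatever the object is
            # Comfort layer: the label only decides how much extra margin to keep.
            margin = COMFORT_MARGIN_M.get(ob.label, 15.0)
            if ob.distance_m < stopping_distance + margin:
                target = min(target, current_speed_mps * 0.5)   # ease off early
        return target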


Actually, because there's a severe shortage of LIDAR sensors (much like video cards & crypto currencies, self driving efforts have outstripped supply by a long shot), machine learning is being used quite broadly in concert with cameras to provide the model of the road ahead of the vehicle.


That is what the comment is saying. Of course the vision stuff is done with machine learning - that is after all the state of the art. But that is a tiny part of the self-driving problem. So you can recognize pedestrians, other cars, lanes, signs, maybe even infer velocity and direction from samples over time. But then the high-level planning phase isn't typically a machine learning model, and so if you record all the state (Uber better do or that's a billion dollar lawsuit right there) you can go back and determine if the high-level logic was faulty, the environment was incomplete etc.


I was responding specifically to "Instead, LIDAR should exactly identify potential obstacles to the self-driving car on the road." - LIDAR isn't economically viable in many self driving car applications (for example: Tesla, TuSimple) right now.


Then your comment is off-topic, because the realm of discussion was explicitly "self-driving cars equipped with LIDAR". Uber's self-driving vehicles are all equipped with LIDAR, as are basically all other prototype fully-autonomous vehicles.


How is it off topic when we're discussing "current-generation self-driving" vehicles?

It's a point of clarification that the originally listed study doesn't take into account, but which could be important to the broader discussion. Especially considering that while this vehicle had LIDAR, the other autonomous vehicle fatality case did not.

> as are basically all other prototype fully-autonomous vehicles

As I pointed out with examples above, no, they are not.


The vehicle involved in the accident has an HDL64 on the roof.


Is it true?

You can get a depth-sensing (time-of-flight) 2D camera, the Orbecc Astra, for $150, or a 1D laser scanner, the RPLIDAR, for $300. Of course they are probably not suited for automotive use, but to me even an extra $2000 for self-driving car sensors isn't that much.


But that's the issue: identifying a pedestrian vs a snowman or a mailbox or a cardboard cutout is important when deciding whether to swerve left or right. It's an asymptotic problem: you'll never get 100% identification, and based on that, even the rigid algorithms will make mistakes.

LIDAR is also not perfect when the road is covered in 5 inches of snow and you can't tell where the lanes are. Or at predicting a driver that's going to swerve into your lane because they spilled coffee on their lap or had a stroke.

With erratic input, you will get erratic output. Even the best ML vision algorithm will sometimes produce shit output, which will become input to the actual driving algorithm.


> Or at predicting a driver that's going to swerve into your lane because they spilled coffee on their lap or had a stroke.

Neither are humans, and a self-driving car can react much faster than any human ever could.


I can see when the car in front of me is acting erratically or notice when the driver next to me is talking on their phone, and adjust my following distance automatically. I don't think self-driving cars are at that point yet. The rules for driving a car on a road are fairly straightforward - predicting what humans will do, that's far from trivial, and we've had many generations of genetic algorithms working on that problem.


Self-driving cars could compensate for that with reaction time. Think of it this way: you trying to predict what the other driver will do is partly compensating for your lack of reaction time. A self-driving car could, in the worst-case scenario, treat the other car as a randomly-moving car-shaped object, compute the envelope of its possible moves, and make sure to stay out of it.
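
A back-of-the-envelope sketch of that idea, with an assumed worst-case acceleration and a point-mass model (real planners reason over full trajectories, not a single radius):

    def reachable_radius(speed_mps, horizon_s, max_accel_mps2=8.0):
        """Farthest the other vehicle could get from its current position within
        the horizon, if it may accelerate up to max_accel in any direction."""
        return speed_mps * horizon_s + 0.5 * max_accel_mps2 * horizon_s ** 2

    def conflict_possible(separation_m, other_speed_mps, own_stopping_time_s):
        """Could the other vehicle reach us before we can come to a stop?"""
        return reachable_radius(other_speed_mps, own_stopping_time_s) >= separation_m

    # Example: a car 40 m away doing 15 m/s, while we need ~3 s to stop.
    print(conflict_possible(40.0, 15.0, 3.0))   # True -> keep more distance or slow down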


Normal cars could do this too. Higher end luxury cars already started using the parking sensors to automatically apply the brakes way before you do if something is in front of the car and approaching fast. If this was really that easy, then we wouldn't have all these accidents reported about self driving cars: the first line of your event loop would just be `if (sensors.front.speed < -10m/s) {brakes.apply()}` and Teslas and Ubers wouldn't hit slow moving objects ever. I suspect that's not really how this works though.
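
For what it's worth, a hedged sketch of why that one-liner isn't sufficient: a pure closing-speed threshold fires on harmless situations and misses urgent ones, which is why real systems reason about something like time-to-collision. The thresholds below are illustrative, not taken from any actual AEB implementation.

    def should_emergency_brake_naive(closing_speed_mps):
        # The event-loop one-liner above: brake only if closing faster than 10 m/s.
        return closing_speed_mps > 10.0

    def should_emergency_brake_ttc(gap_m, closing_speed_mps, ttc_threshold_s=1.5):
        if closing_speed_mps <= 0:
            return False                    # gap is constant or opening
        return gap_m / closing_speed_mps < ttc_threshold_s

    # Closing at 6 m/s with only 5 m of gap: TTC ~0.8 s, urgent -- but the naive
    # check stays quiet because 6 < 10.
    print(should_emergency_brake_naive(6.0), should_emergency_brake_ttc(5.0, 6.0))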


Exactly, with LIDAR the logic isn't very tricky: if something is in front, stop.


More than that - if something approaching from the side at intercept velocity, slow to avoid collision.


You're handwaving away the crux of the matter: while for a human the condition seems straightforward (as we understand that "in front" means "in a set of locations in the near future, determined by a many-dimensional vector set"), expressing this in code is nontrivial.


> I won't be surprised if someone at some point wears a stripy outfit, and the car thinks they are a part of the road, and tries to explicitly chase them down until they are under the wheels. Or if the car suddenly decides that the road continues at a 90 degree angle off a bridge. Or that the splashes from a puddle in front is actually an oncoming car and it must swerve into the school kids crossing the perpendicular road.

Are you working on the next season of Black Mirror?

In all seriousness, my fear (and maybe not fear, maybe it's happy expectation in light of the nightmare scenarios) is that if a couple of the "weird and terrifying" accidents happen, the gov't would shut down self-driving car usage immediately.


I am definitely not. Their version of the future is too damn bleak for me.

Your fear is very much grounded in reality. US lawmakers tend to be very reactionary, except in rare cases like gun laws. So it won't take much to have restrictions imposed like this. Granted, I believe some regulation is good; after all the reason today's cars are safer than those built 20 years ago isn't because the free market decided so, but because of regulation. But self driving cars are so new and our lawmakers are by and large so ignorant, that I wouldn't trust them to create good regulation from the get go.


> except in rare cases like gun laws

They're still very reactionary in that, which is precisely why it isn't very effective when a subset of them do react: there are plenty of smart things that could get proposed, but the overlap between people who know what they're talking about and people who want the laws is exceptionally small, so consequently dumb, ineffective stuff that has no chance of passing anyway gets proposed. What does get proposed is a knee-jerk reaction to what just happened, and rarely actually looks systemically at the current laws and gun violence as a whole. Example: the Las Vegas shooting prompted a lot of talk of bump stock bans. Bump stocks are so rarely used at all, never mind in violence, and they will generally ruin guns that weren't originally made to be fully automatic very quickly if they're actually used for sustained automatic fire. Silly point to focus on suddenly. After the Florida shooting last month, so much of the focus was on why rifles are easier to obtain than handguns. And it's because overwhelmingly most gun violence involves handguns. Easily concealable rifles are already heavily regulated at the federal level for that very reason.


> Example: the Las Vegas shooting prompted a lot of talk of bump stock bans. Bump stocks are so rarely used at all, nevermind in violence, and they will generally ruin guns that weren't originally made to be fully-automatic very quickly if they're actually used for sustained automatic fire.

<off-topic> This is nonsense. Typical semi-autos are way over-built. Short of mechanical wear or explicit tampering with the disconnector, there is no risk whatsoever in firing thousands of rounds with a bump stock. Actually, plastic/wood furniture is more likely to burn/melt before the mechanical parts will actually fail. At worst, you might bend a gas piston, but the rifle will otherwise be fine.

The underlying reasoning behind the push against the bump stock ban is that it was basically a semi-auto ban, as with a bit of training you can trivially bump fire any semi-auto without a bump stock, from either the shoulder or the hip, with a mere finger. </off-topic>


>> you might bend a gas piston

Tubes on low- to mid-range civilian DE guns can burn out very quickly, and are in fact designed to do so long before you get damage to the more expensive parts of the gun - I've seen it happen in most of the cases (which are admittedly quite few in number despite how often I'm there) where I've seen someone using a bump stock at a range. In the most recent case I think the guy was on his 3rd mag and it ruptured. It was an M&P 15 Sport II, if I recall. Not a cheap no-name brand, but about as low-cost as you can get, and missing all the upgrades in the version they market to cops. High-end ARs would fare better, I'd expect, but high-end ARs are again so rarely used for actual violence because they're usually only purchased by people shooting for a serious hobby in stable life situations. And honestly I feel the same people buying those probably find bump stocks tacky and gaudy, like I do.

Even in the most liberal interpretation of the proposed law, I don't think any bump stock ban would become a semi-auto ban. I could see the vague language getting applied to after-market triggers, especially ones like Franklin Armory, but you've gotta have some added device for any of the proposals I've seen to even remotely apply.


Largely because of Ralph Nader.

It's amazing how reformers get demonized even after their platform is accepted wholesale by the rest of the world.


US lawmakers are very reactionary in the case of gun laws too, it's just that gun owners usually have enough political pull to block them from successfully getting laws passed. (The current campaign for gun control is 100% a reactionary response to whatever's been making the biggest news headlines. For example, the vast majority of US gun homicides are carried out with handguns, yet gun control supporters seem to think it's absurd they're more tightly regulated than AR-15s - which are relatively rarely used to kill anyone and have more mundane uses for things like hunting - just because the AR-15s are in the headlines. The US's most deadly school shooting was done with handguns too.) In fact, I'd argue the reactionary nature of US lawmaking is important to understanding why "sensible", "common-sense" gun control laws are so strongly opposed in the first place.


> US lawmakers tend to be very reactionary, except in rare cases like gun laws.

And for good reason - they are constitutionally prohibited from doing so.


> the reason today's cars are safer than those built 20 years ago isn't because the free market decided so, but because of regulation.

All safety functionality was introduced and used way before regulators even knew it was possible.

Edit: please explain the downvotes, ideally with examples


> Are you working on the next season of Black Mirror?

In Black Mirror, all cars in the world will simultaneously swerve into nearby pedestrians, buildings, or other cars.


It doesn't even need to be that. Imagine the shit show a city would be when its entire transportation fleet is immobilized because someone has messed with their safety features.


That could happen during a massive, remotely-triggered software update.


Just imagine a batch of cars with malfunctioning inertial sensors (it's brought down more than a few of my drones). GPS and perception (through ML or LIDAR) will work most of the time to override such errors, but if there was a second malfunction... "The car is swerving left at 1m/s; correct right."


Like the one that happened some days ago with Oculus helmets.


That could happen because of a malicious 'time-bomb' placed by a hostile state actor in such an update.


More likely it would be something about dissidents being killed in their cars, or cars going after people who aren't liked.


If that's bad, what happens when the robocars get hacked?


Like we have botnets made out of thousands (millions?) of compromised computers, we could have entire fleets of compromised cars, rentable on the black market using cryptocurrency, that could be used to commit crimes (e.g. homicide) while keeping the killer anonymous.

Scary stuff. I hope these self-driving cars will be able and designed to work while completely offline, with no built-in way to ever connect to a network. But given the biggest player in the field seems to be Google, they'll probably be always connected in order to send data to the mothership and receive ads to show you.


"I hope these self-driving cars will be able and designed to work while completely offline, with no built-in way to ever connect to a network."

Don't hold your breath about that. There will be a huge load of data ready to be sold to advertising companies just by listening what passengers talk about when passing near areas/stores/billboards/events etc.


> receive ads to show you

I'm just envisioning a scenario where the car automatically pulls to the side of the highway, locks the doors and dishes you with a 15 second ad, and then the doors unlock and the journey resumes as normal.


Or just (virtually) replace billboards with personalized content


Using these cars to commit homicide was actually one of the plot points in the second book of Three Body Problem trilogy by Liu Cixin. Very much recommended if you are into sci-fi.


Those books are interesting in that Liu seems to get away with seriously portraying narratives that would be out of bounds in the "approved" popular culture of the West. Murder by robocar is a minor example, but others include portrayal of the inherent weakness of societies in which men are effeminate and the superiority of leaving strategic decisions to military authorities. (I don't particularly agree with those propositions, but they are certainly present in the books.)


I'm halfway done with the last book in the trilogy and I am finding the assumptions and viewpoints of the world from a Chinese perspective quite interesting. The one that struck me most was how he presents humanity's greatest strength over the technologically superior aliens as the human ability to conceal their true thoughts and the possibility of deception. Quite different from Christianity's high value placed on honesty.


The great stories of paganism and animism were composed by artists, and they all feature trickery and uncertainty. The Christian Bible has some of that (I like Job), but the majority was written by humorless unimaginative prudes. I suppose some of the Chinese philosophers are a little better than St. Paul, but mostly when they're being playful.


Confused by the downvote here. This is a perfectly legitimate question, and indeed it should be asked more, not less, often.


This is a stupid question. The answer is the same as if someone physically messes with your car.


Physical hacking can only happen to one car at a time.


What if someone hacks the assembly line for Ford?


What if someone puts a bomb in your car or messes with your brakes?


Right, but what if someone could put a bomb in 100k cars or mess with 100k cars' brakes remotely over the internet? A large enough quantitative change becomes a qualitative change.


To be fair, all the examples you gave could also happen to a human driver.


Will never happen, at least not with any explicit depth sensors. Any car with lidar has depth perception orders of magnitude better than yours and would never chase an object merely because it resembles road markings.


I mean, maybe the self-driving car shouldn't exist if it's just going to run people over.


>So we will have overall a much smaller number of deaths caused by self driving cars

Why? This is what the self-driving car industry insists on, but it has nowhere near been proven (only BS stats, under ideal conditions, no rain, no snow, selected roads, etc. -- and those as reported by the companies themselves).

I can very well imagine an AI that drives better than the average human. But I can also imagine that being able to write it any time soon is not a law of nature.

It might take decades or centuries to get out of some local maxima.

General AI research also promised the moon back in the 60s and 70s, and it all died with little to show for it in the 80s. It was always "a few years down the line".

I'm not so certain that we're gonna get this good car AI anytime soon.


If self-driving cars 1. don't read texts whilst driving, 2. don't drink alcohol, 3. stick to the speed limit, 4. keep a 3-4s distance to the car in front, 5. don't drive whilst tired 6. don't jump stop signs / red lights it will solve a majority of crashes and deaths. [0]

The solutions to not killing people whilst driving aren't rocket science but too many humans seem to be incapable of respecting the rules.

[0]: http://www.slate.com/articles/technology/future_tense/2017/1...


But it doesn't work like that. You can't just say "If they don't do X, Y or Z", because while they may not do X, Y or Z, that doesn't mean they won't do A, B or C that are equally bad or worse. Human and self-driving are two completely separate categories; you can't just assume that things one does well the other also does well, and so just subtract the negatives. You could easily flip your comment to go the other way: "If human drivers don't mistake trucks for clouds or take sharp 90-degree turns for no reason then they're safer".

I do think that self-driving cars will be safer, but it's up to its proponents to prove that.


As the sibling comment says, it does depend on self-driving cars matching human level performance. But with all AI/Neural Networks it is very possible to match human performance because most of the time you can throw more human-level performance data at it.

Each of the crashes that self-driving cars cause can be fixed and prevented from happening again. The list I gave are human flaws that will almost certainly never be fixed.

I further agree with you it's up to the proponents to prove that. It's a good thing to force a really high bar for self-driving cars. Then assuming the technology is maintained once AI passes the bar it should only ever get better.


> Each of the crashes that self-driving cars cause can be fixed and prevented from happening again. The list I gave are human flaws that will almost certainly never be fixed.

Not if you put neural networks / deep learning in the equation. This stuff is black boxes connected to black boxes, that work fine until they don't, and then nobody knows why they failed - because all you have is bunch of numbers with zero semantic information attached to them.


Neural networks are only a small part of self-driving car algorithms. The planning, sensor fusion, etc. are usually not done with deep learning (for this reason). Only visual detection, because we have nothing else that works better in this realm. But lidar, radar, sonar, what have you, all work without any deep learning. The high-level decision making is also done without deep learning.

The only questionable parts will be where the vision system fails, and those are similar actually to human problems. Because human vision also often fails (sunlight on windshield, lack of attention, darkness, etc.)


> But with all AI/Neural Networks it is very possible to match human performance because most of the time you can throw more human-level performance data at it.

Are you in very vague words implying that AGI has been invented? AI might have matched humans in image recognition, but it is far away in general decision making.

And finally, I am tired of hearing "safer than a human". That should never be the comparison; rather, a human at the helm and an AI running in the background that takes over when the human makes an obvious mistake -- you know, like an emergency braking system.


"Each of the crashes that self-driving cars can be fixed and prevented from happening again."

If those situations recur exactly as they happened the first time, sure they can be prevented from happening again.

That is, if a car approaches the exact same intersection as the exact same time of day, and a pedestrian that looks exactly like the pedestrian in this accident crosses the street in exactly the same way, with exactly the same other variables (like all the other pedestrians and cars around there that the sensors can see), the data could be enough the same that the algorithm will detect it at close enough to the original situation to avoid the accident this time.

But it's not at all clear how well their improvements will generalize to other situations which humans would consider to be "the same" (ie. when any pedestrian in any intersection crosses any street).


You missed a rather important point 0:

If self-driving cars are, at their best, roughly as capable as a human driver.

This is a big 'if'.

The solution to not killing people is a kind of rocket science. In fact, it's probably harder than rocket science[0]. It's predicated on a lot of things that are very, very, very hard. The fact is that humans, who are already pretty capable of most of these very very hard things, often choose to reduce their own capabilities.

If the best self-driving tech is no better than a drunk human, however, then we haven't gained much.

---

[0] though perhaps not harder than brain surgery.


I really don't think it's a big 'if'. As long as there is human level performance data, neural networks can be trained to match that level of performance. So it's a matter of time. It is indeed very, very hard, but also solvable.


I agree that it's solvable.

However, the process you're describing, of collecting human-level performance data, requires the ability to gather all of the data relevant to the act of driving in a manner consumable by the algorithm in question. This is the simulation problem, and it's very, very, very hard (it's why genetic algorithms have traditionally not gotten much further than toy examples, in spite of being a cool idea). Perhaps it is the case that it is very important to have an accurate model of the intentions of other agents (e.g., pedestrians) in order to take preventative action rather than pure reaction. Perhaps it is very important to have a model of what time of day it is, or the neighborhood you're driving in. The likelihood that it is going to rain some time in the next hour. Whether the stock market closed up or down that day.

It also assumes that neural networks (or the more traditional systems used elsewhere) are sufficiently complex to model these behaviors accurately. Which we do not yet have an answer to.

So, when I say, 'a big if', I mean for the foreseeable future, barring some massive technological/biological breakthrough. That could be a very long time.


For those who do not get the reference: https://www.youtube.com/watch?v=THNPmhBl-8I


Well, one answer is either it will be positively demonstrated to be statistically safer, or the industry won't exist. So once you start talking about what the industry is going to look like, you can assume average safety higher than manual driving.


> This is what the self-driving cars industry insists on, but has nowhere near been proven

Because machines have orders of magnitude fewer failure modes than humans, but with greater efficiency. It's why so much human labour has been automated. There's little reason to think driving will be any different.

You can insist all you like that the existing evidence is under "ideal conditions", but a) that's how humans pass their driving tests too, and b) we've gone from self-driving vehicles being a gleam in someone's eye to actual self-driving vehicles on public roads in less than 10 years. Inclement weather won't take another 10 years.

It's like you're completely ignoring the clear evidence of rapid advancement just because you think it's a hard problem, while the experts actually building these systems expect fully automated transportation fleets within 15 years.


> It's why so much human labour has been automated. There's little reason to think driving will be any different.

Repetitive, blunt, manual labor now, and probably much basic legal/administrative/medical work in the near future. But we still pay migrant workers to harvest fruit, and I don't imagine a robot jockey winning a horse race anytime soon.

Driving a car under non-ideal conditions is incredibly complex, and relies upon human communication. For example: eye contact between a driver and pedestrian; one driver waving at another to go ahead; anticipating the behavior of an old lady in an Oldsmobile. Oh, the robots will be better drivers eventually, but it will be a while. We humans currently manage about one death per hundred million miles; Uber made it all of two million. I expect we'll have level 5 self-driving cars about the same time we pass the Turing test.


> But we still pay migrant workers to harvest fruit

Harvesting fruit is far more complex than driving. It's a 3D search through a complex space.

> Driving a car under non-ideal conditions is incredibly complex, and relies upon human communication.

No it doesn't. The rules of the road detail precisely how cars interact with each other and with pedestrians.

> We humans currently manage about one death per hundred million miles; Uber made it all of two million.

Incorrect use of statistics.


> Harvesting fruit is far more complex than driving. It's a 3D search through a complex space.

Are you making a joke? "a 3D search [for a path that reaches the destination safely and legally] through complex space" is exactly how I would describe driving. (Also, driving is an online problem.)


Cars don't leave the road which is a 2D surface. In what way is that a 3D problem?


Ever heard of an elevated highway, ramp, flying junction, bridge, or tunnel?

I mean, yeah the topology is not as complex as a pure unrestricted 3d space but it's also more complex than pure 2d space. It's a search through a space, and it's complex, I don't know if nitpicking about the topology adds a lot here?


That's still 2D space. A car simply can't move along the z axis, so the fact that the road itself moves in 3 dimensions is irrelevant.

Even navigational paths that consider all of the junctions, ramps, etc. are simply reduced to a weighted graph with no notion of any dimensions beyond forward and backwards.
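
A small illustration of what "reduced to a weighted graph" means in practice, using a made-up five-node network: a flyover and the surface street it crosses never share a node, so the 3D geometry simply disappears and route planning is ordinary shortest-path search.

    import heapq

    # node -> list of (neighbour, cost); the flyover and the surface street it
    # crosses are just separate edges, with no vertical dimension anywhere.
    ROAD_GRAPH = {
        "A": [("ramp", 2.0), ("surface", 3.0)],
        "ramp": [("flyover", 1.0)],
        "flyover": [("B", 4.0)],
        "surface": [("B", 6.0)],
        "B": [],
    }

    def shortest_path_cost(graph, start, goal):
        dist = {start: 0.0}
        queue = [(0.0, start)]
        while queue:
            cost, node = heapq.heappop(queue)
            if node == goal:
                return cost
            if cost > dist.get(node, float("inf")):
                continue
            for nxt, w in graph[node]:
                new_cost = cost + w
                if new_cost < dist.get(nxt, float("inf")):
                    dist[nxt] = new_cost
                    heapq.heappush(queue, (new_cost, nxt))
        return float("inf")

    print(shortest_path_cost(ROAD_GRAPH, "A", "B"))   # 7.0, via the flyover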


Your comment is just random hopeful assertions though...

>It's why so much human labour has been automated.

But how much human labour that is as complicated as driving has been automated? As far as I can tell automation is very, very bad when it needs to interact with humans who may behave unexpectedly.

>b) we've gone from self-driving vehicles being a gleam in someone's eye to actual self-driving vehicles on public roads in less than 10 years. Inclement weather won't take another 10 years.

>It's like you're completely ignoring the clear evidence of rapid advancement just because you think it's a hard problem, while the experts actually building these systems expect fully automated transportation fleets within 15 years.

Actually plenty of experts within the field disagree with you.

“I tell adult audiences not to expect it in their lifetimes. And I say the same thing to students,” he says. “Merely dealing with lighting conditions, weather conditions, and traffic conditions is immensely complicated. The software requirements are extremely daunting. Nobody even has the ability to verify and validate the software. I estimate that the challenge of fully automated cars is 10 orders of magnitude more complicated than [fully automated] commercial aviation.”

Steve Shladover, transportation researcher at the University of California, Berkeley

http://www.automobilemag.com/news/the-hurdles-facing-autonom...

With autonomous cars, you see these videos from Google and Uber showing a car driving around, but people have not taken it past 80 percent. It's one of those problems where it's easy to get to the first 80 percent, but it's incredibly difficult to solve the last 20 percent. If you have a good GPS, nicely marked roads like in California, and nice weather without snow or rain, it's actually not that hard. But guess what? To solve the real problem, for you or me to buy a car that can drive autonomously from point A to point B—it's not even close. There are fundamental problems that need to be solved.

Herman Herman, Director of the National Robotics Engineering Center @ CMU

https://motherboard.vice.com/en_us/article/d7y49y/robotics-l...


>>But how much human labour that is as complicated as driving has been automated?

Quite a lot actually.

These days you can produce food for several thousand people using a few hundred people and plenty of machines.

Part of the reason why we haven't yet reached a Malthusian catastrophe is this.


Automated food production is very much simpler, because you're usually only producing one food item at large scale. That's the super easy stuff to automate.

Automated driving is more like a fully automated chef, that can create new dishes from what his clients tell him they like. Without the clients being able to properly express themselves. That's a lot more complicated than following a recipe.

Difficulty of automation goes roughly trains < planes << cars.

Automated trains are simple, but don't provide much value. Automating planes provided value because it's safer than just with human pilots. Automated cars are a different league of complexity.


> But how much human labour that is as complicated as driving has been automated?

Driving is not complicated at its core. Travel along vectors that intersect at well-defined angles. Stop to avoid obstacles whose vectors intersect with yours.

Sometimes those obstacles will intersect with your vector faster than you can stop, which is probably what happened to this woman. As long as the autonomous car was following the prescribed laws, then it's not at fault, and a human definitely would not have been able to stop either.
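
To make the "vectors that intersect" picture concrete, here is a toy closest-point-of-approach check under a constant-velocity assumption; the positions, velocities, and the idea that braking is warranted below some safety radius are all illustrative.

    def closest_approach(p_rel, v_rel):
        """p_rel, v_rel: relative position and velocity (other minus us), 2D tuples.
        Returns (time_of_closest_approach, distance_at_that_time)."""
        px, py = p_rel
        vx, vy = v_rel
        v2 = vx * vx + vy * vy
        if v2 == 0.0:                               # no relative motion
            return 0.0, (px * px + py * py) ** 0.5
        t = max(0.0, -(px * vx + py * vy) / v2)     # only future times matter
        cx, cy = px + vx * t, py + vy * t
        return t, (cx * cx + cy * cy) ** 0.5

    # Pedestrian 20 m ahead and 4 m to the side, stepping into our lane while we
    # close at 11 m/s (~40 km/h relative).
    t, d = closest_approach((20.0, 4.0), (-11.0, -1.5))
    print(f"closest approach: {d:.1f} m in {t:.1f} s")   # small d, small t -> brake now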

> Merely dealing with lighting conditions, weather conditions, and traffic conditions is immensely complicated. The software requirements are extremely daunting.

Which is why self-driving cars don't depend on visual light, and why prototypes are being tested in regions without inclement weather. Being on HN, I'm sure you're well familiar with the product development cycle: start with the easiest problem that does something useful, then generalize as needed.

> With autonomous cars, you see these videos from Google and Uber showing a car driving around, but people have not taken it past 80 percent. It's one of those problems where it's easy to get to the first 80 percent, but it's incredibly difficult to solve the last 20 percent. If you have a good GPS, nicely marked roads like in California, and nice weather without snow or rain, it's actually not that hard.

Right, so the experts agree with me that the problem the pilot projects are addressing is readily solvable, and that general deployment will take a number of years of further research, but isn't beyond our reach. This past year I've already read about sensors that can peer through ice and snow. 15 years is not at all out of the question.


Driving isn't just travel along a vector. Maybe trains, but not urban roads. urban roads are full of people and animals.

If a ball bounces in front of me, I slow down, expecting a dog or a child running after it. No self-driving car now, or in 30 years, is going to be able to infer that.

Driving is essentially interacting with the environment, reading hand signals from people, understanding intent of pedestrians, bicycles and other drivers. No way any AI can do that now.


> Driving isn't just travel along a vector. Maybe trains, but not urban roads.

Trains travel along a straight line, not a vector in 2D space.

> If a ball bounces in front of me, I slow down expecting a dog or a child running after it. No self driving car now, and in 30 years is going to be able to infer that.

Incorrect. I don't know why you think humans are so special that they're the only system capable of inferring such correlations.


Or take a slow approach like Google Waymo, which has been almost perfect on the roads so far. Uber is rushing it, and this could cost the whole industry.


exactly. I would personally never trust an Uber self-driving car, specifically because I've lost trust in the company itself.


Seems like you're jumping to conclusions here. Let's wait to see what exactly happened. I highly doubt that any of these companies just use "straight" ML. For complex applications, there's generally a combination of rules-based algorithms and statistical ML-based ones applied to solve problems. So to simplify: highly suspect predictions aren't just blindly followed.


Totally agree on your premise that we can rationalize humans killing humans - but we cannot do so with machines killing humans.

If self-driving cars really are safer in the long-run for drivers and pedestrians - maybe what people need is a better grasp on probability and statistics? And self-driving car companies need to show and publicize the data that backs this claim up to win the trust of the population.


It's a sense of control. I as a pedestrian (or driver) can understand and take precautions against human drivers. If I'm alert I can instantly see in my peripheral vision if a car is behaving oddly. That way I can seriously reduce the risk of an accident and reduce the consequences in the very most cases.

If the road was filled with self-driving cars there would be fewer accidents, but I wouldn't understand them, and with that comes distrust.

Freak accidents without explanations are not going to cut it.

Also, my gut feeling says this was a preventable accident that only happened because of many layers of poor judgement. I hope I'm wrong but that is seriously what I think of self-driving attempts in public so far. Irresponsible.


If you ask me which one is coming first, Quantum computing or "better grasp of probability and statistics" among the general public - I take the first with 99% confidence.


If a human kills a human, we have someone in the direct chain of command that we can punish. If an algorithm kills a person... who do we punish? How do we punish them in a severe enough way to encourage making things better?

Perhaps, similar to airline crashes, we should expect Uber to pay out to the family, plus a penalty fine. 1m per death? 2? What price do we put on a life?


>How do we punish them in a severe enough way to encourage making things better?

This is a tough problem, but if we stop or limit the allocation of resources to protect the secretive intellectual property that is autonomously running people over, that is the most effective incentive I can see. Plus it's pretty easy to do.

We don't even have to force them to disclose anything, by affording less legal protection, their employees will open-source it for us.


It's not just about punishment though. It's about our brains being (mostly) good at saying "well they were unlucky, but I'd never find myself in that situation, so I'm OK." And with some brains wired for anxiety doing the opposite and instead only thinking about how they are the ones that would be in all those situations.


> maybe what people need is a better grasp on probability and statistics

Definitely, though my interpretation of your statement is "self driving cars have only killed a couple people ever but human cars have killed hundreds of thousands". If that's correct, that's not going to win anyone over nor is it necessarily correct.

While the state of AZ definitely has some responsibility for allowing testing of the cars on their roads, Uber needs (imo) to be able to prove the bug that caused the accident was so much of an edge case that they couldn't easily have been able to foresee it.

Are they even testing this shit on private tracks as much as possible before releasing anything on public roads? How much are they ensuring a human driver is paying attention?


Hm, people are fine with folks getting mauled in an industrial accident, or killed misusing power equipment. So it's not purely a machine thing.

Maybe because it's unexpected - the victim is not involved until they are dead?


The innocent bystander is indeed the reason. People working with machinery accept certain risks.


Pedestrians and cyclists (and car occupants for that matter) accept risks near or on roads. You expect that at least 1. drivers are vigilant and cars are maintained so the 2. brakes don't fail or 3. wheels don't fall off. Yet in my life I have eyewitnessed all 3 of those assumptions being wrong at least once.


I'd be surprised if you could educate this problem away just by publishing statistics. Generally, people don't seem to integrate statistics well on an emotional level, but do make decisions based on emotional considerations.

I mean, people play the lottery. That's a guaranteed loss, statistically speaking. In fact, it's my understanding that, where I live, you're more likely to get hit by a (human-operated) car on your way to get your lottery ticket than you are to win any significant amount of money. But still people brave death for a barely-existent chance at winning money!


> These are "normal" reasons for bad accidents and we can not only rationalize them, but also rationalize how it wouldn't happen to us: "I don't drive near colleges where young kids are likely to drive drunk", "I don't text (much) while I drive", "I pay attention".

Tangent: is there a land vehicle designed for redundant control, the way planes are? I've always wondered how many accidents would have been prevented if there were classes of vehicles (e.g. large trucks) that required two drivers, where control could be transferred (either by push or pull) between the "pilot" and "copilot" of the vehicle. Like a driving-school car, but where both drivers are assumed equally fallible.


Pilots don't share control of an aircraft; the copilot may help with some tasks, but unless the captain relinquishes control of the yoke (etc.) he's flying it. So you'd still have issues where a car's "pilot" gets distracted, or makes a poor decision.


It's even more complex - there's the temporary designation "pilot flying" and "pilot not flying", with a handover protocol and whatnot: https://aviation.stackexchange.com/questions/5078/how-is-air...


> This is what's going to happen. If you've ever seen a machine learning algorithm in action, this isn't surprising at all. Basically, they'll behave as expected some well known percentage of the time. But when they don't, the result will not be just a slight deviation from the normal algorithm, but a very unexpected one.

Do we even know yet what's happened?

It seems rather in bad taste to take someones death, not know the circumstances then wax lyrical about how it matches what you'd expect.


Maybe for now, autonomous driving should be limited to freeways and roads that don't have pedestrian crossings/pavements.


This is a great point. Solve it one step at a time.

But the problem is Uber's business plan is to replace drivers with autonomous vehicles ferrying passengers. i.e. take the driver cost out of the equation. Same goes for Waymo and others trying to enter/play in this game. It's always about monetization which kills/slows innovation.

Highway-mode alone is not going to make a lot of money except in the trucking business, and I bet they will succeed there soon enough and reduce transportation costs. But passenger vehicles, not so much. It may help in reducing fatigue-related accidents, but that's not a money-making business for a multi-billion dollar company.

That being said, really sad for the victim in this incident.


From what I could tell, it never was a money making business. Uber now and forever, operates at a loss.


Another quirk of people, particularly acting via "People in Positions of Authority" is that they will need to do something to prevent next time.

Why did this happen? What steps have we taken to make sure it will never happen again? These are both methods of analysing & fixing problems and methods of preserving a decision-making authority. Sometimes this degrades into a cynical "something must be done" for the sake of doing, but... it's not all (or even mostly) cynical. It just feels wrong going forward without correction, and we won't tolerate this from our decision makers. Even if we will, they will assume (out of habit) that we won't.

We can't know how this happened. There is nothing to do. ...and... this will happen again, but at a rate lower than human drivers' more or less opaque accidents. I'm not sure how that works as an alternative to finding out what went wrong and doing something.

Your comment is easily translated into "you knew there was a glitch in the software, but you let this happen anyway." Something will need to be done.


Even if we assume that we wanted to address this for real, I fear that it will be next to impossible to actually assess whether whatever mistake caused this has actually been addressed when all the technology behind it is proprietary. I can easily see people being swayed by a well-written PR speech about how "human safety" is their "top priority" without anything substantial actually being done behind the scenes.

I think any attempts to address such issues have to come with far-ranging transparency regulations on companies, possibly including open-sourcing (most of) their code. I don't think regulatory agencies alone would have the right incentives to actually check up on this properly.


It's amazing how quickly things can happen after an accident.

In a nearby town, people have petitioned for a speed limit for a long time. Nothing happened until a 6 year old boy was killed. Within a few weeks a speed limit was in place.


Safety rules are written in blood. Often someone has to die before action is taken. But eventually people forget, get sloppy, or consider the rules completely useless. See: nightclub fires.


> So we will have overall a much smaller number of deaths caused by self driving cars, but ones that do happen will be completely unexpected and scary and shitty. You can't really get away from this without putting these cars on rails.

One of the big questions I have about autonomous driving is if it's really a better solution to the problems it's meant to solve than more public transportation.


Do you have any experience developing autonomous driving algorithms? Because you are making a lot of broad claims about their characteristics that only someone with a fairly deep level of expertise could speculate about.


Interesting insight. While self-driving cars should reduce the number of accidents, there are going to be some subset of people who are excellent drivers for which self-driving cars will increase their accident rates (for example, the kind of person who stays home when its icy, but decides that their new self-driving car can cope with the conditions).


Your comment reminded me of the One Pixel Attack [1] and my joke about wearing a giant yellow square costume...

[1] https://github.com/Hyperparticle/one-pixel-attack-keras


> Moreover, the human brain won't like processing these freak accidents.

I think this is really key. The ability to put the blame on something tangible, like the mistakes of another person, somehow allows for more closure than if it was a random technical failure.


Very well put.

It boggles my mind that a forum full of computer programmers can look at autonomous cars and think "this is a good idea".

They are either delusional and think their code is a gift to humanity or they haven't put much thought into it.


I don't believe I'm delusional, my code is certainly medium-to-okay. I've put a lot of thought into this. I think autonomous cars are a very good idea, I want to work on building them and I want to own one as soon as safely possible.

Autonomous cars, as they exist right now, are not up to the task at hand.

That's why they should still have safety drivers and other safeguards in place. I don't know enough to understand their reasoning, but I was very surprised when Waymo removed safety drivers in some cases. This accident is doubly surprising, since there WAS a safety driver in the car in this case. I'll be interested to see the analysis of what happened and what failures occurred to let this happen.

Saying that future accidents will be "unexpected" and therefore scary is FUD in its purest form, fear based on uncertainty and doubt. It will be very clear exactly what happened and what the failure case was. Even as the parent stated, "it saw a person with stripes and thought they were road" - that's incredibly stupid, but very simple and explainable. It will also be explainable (and expect-able) the other failures that had to occur for that failure to cause a death.

What set of systems (multiple cameras, LIDAR, RADAR, accelerometers, maps, GPS, etc.) had to fail in what combined way for such a failure? Which one of N different individual failures could have prevented the entire failure cascade? What change needs to take place to prevent future failures of this sort - even down to equally stupid reactions to failure as "ban striped clothing"? Obviously any changes should take place in the car itself, either via software or hardware modifications, or operational changes i.e. maximum speed, minimum tolerances / safe zones, even physical modifications to configuration of redundant systems. After that should any laws or norms be changed, should roads be designed with better marking or wider lanes? Should humans have to press a button to continue driving when stopped at a crosswalk, even if they don't have to otherwise operate the car?

Lots of people have put a lot of thought into these scenarios. There is even an entire discipline around these questions and answers, functional safety. There's no one answer, but autonomy engineers are not unthinking and delusional.


We look at the alternative which is our co-workers, and people giving us our specs, and our marketing teams and think 'putting these people in charge of a large metal box travelling at 100kmh interacting with people just like them - that is a good idea'...

It is not that we think that software is particularly good, it is that we have a VERY dim view of humanity's ability to do better.


    I won't be surprised if someone at some point wears a stripy outfit, and the car thinks they are a part of the road

Shouldn't be an issue with 3D cameras


You can't prove a negative and should be careful about promising what may turn out to be false. There is potentially quite a bit of money to be made by people with the auto version of slipping on a pickle jar. When there is money to be made, talented but otherwise misguided people apply their efforts.


Well, in that case a non-textured or reflective clothing could have a similar effect.


Wouldn't LIDAR pick it up still?


Imagine someone carrying a couple of two by fours they are holding vertically. They then stop on the sidewalk to check their phone at just the right angle. I am not really giving specific examples, as much as trying to illustrate that the way ML systems fail isn't by being slightly off from the intended programming, but by being really off.


To put this accident in perspective, Uber self-driving cars have totaled about 2 to 3 million miles, while the fatality rate on US roads is approximately 1.18 deaths per 100 million miles [1].

[1] https://www.nhtsa.gov/press-releases/usdot-releases-2016-fat...


The appropriate comparison would be to ask how many pedestrians were struck by a car and killed.

Considering that humans would likely slow down if they see a pedestrian - even if one appeared suddenly - this is even more disconcerting.


It's more specific, but I care about all fatalities, not just car-pedestrian fatalities.


At some point, autonomous vehicles will be carrying cargo instead of passengers. When that happens, fatalities per mile driven will no longer be a valid comparison between manned and unmanned vehicles, as catastrophic accidents will have fewer people involved.


I do think it'll always be fair to consider total lives lost per usage metric, regardless of if people were in the vehicle or not. The lives of drivers & passengers have equal value to the lives of pedestrians.


You can still tally fatalities per million ton miles of cargo. As I'm sure people do already for trucks vs trains etc.


This Rand study looks at impact of lives saved and recommends that "Results Suggest That More Lives Will Be Saved the Sooner HAVs Are Deployed". Any mishap, while most unfortunate and tragic for everyone concerned, should not result in kneejerk reactions!

https://www.rand.org/pubs/research_reports/RR2150.html


That's not apples to apples. How many cars account for the NHTSA number and how many for Uber's program?


I'd say that miles driven is a good normalizer.

In fact, NHTSA statistics include miles driven under adverse conditions (rain, snow, etc.), while I'd bet that this is not the case for Uber.


Statistically, I don't think that miles driven on lonely freeways equal those in Manhattan. And perceptually, running down city pedestrians at 40 MPH will impact public thinking more than highway fatalities.


True.


Knowing that makes this much more infuriating.


Waymo has driven 2.7 billion miles in simulation [1]. Is that enough to give them some idea of how many deaths to expect?

[1] https://waymo.com/ontheroad/


It's impossible to know. The simulation needs to be very good, with thousands and thousands of all the weird situations that can occur in the real world. Without knowing how sophisticated the simulation is, and whether they also are using generative algorithms to try to break the autonomous system, you can't even ballpark it.


That doesn't sound like a fair comparison. How many human interventions were there in the 2-3 million Uber miles?


The people leading the development should demonstrate that they can stop for pedestrians by personally jumping out in front of them on a closed test road. If they're not able to demonstrate this, they shouldn't be putting them on public roads.


Self driving cars are still subject to the laws of physics... unless you're going to dictate that self-driving cars never go above 15mph, I wouldn't advocate jumping in front of even a "perfect" self-driving car.

Braking distance (without including any decision time) for a 15mph car is 11 ft, for a 30mph is 45 ft. Self driving cars won't change these limits. (well, they may be a little better than humans at maximizing braking power through threshold braking on all 4 wheels, but it won't be dramatically different)
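
For reference, those figures fall out of the standard stopping-distance formula d = v^2 / (2 * mu * g), assuming an idealized friction coefficient of about 0.7 (dry pavement) and zero reaction time - a rough model; real numbers vary with tires, surface, and load.

    MU, G = 0.7, 9.81
    MPH_TO_MPS = 0.44704
    M_TO_FT = 3.28084

    for mph in (15, 30, 40):
        v = mph * MPH_TO_MPS
        d_ft = v ** 2 / (2 * MU * G) * M_TO_FT
        print(f"{mph} mph -> ~{d_ft:.0f} ft of braking distance")
    # ~11 ft at 15 mph, ~43 ft at 30 mph, ~76 ft at 40 mph -- in line with the
    # figures quoted above.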

So even with perfect reaction times, it will still be possible for a self-driving car to hit a human who enters its path unexpectedly.


Once upon a time when I was learning to drive, one of the exercises my instructor used was to put me in the passenger seat while he drove, and have me try to point out every person or vehicle capable of entering the lane he was driving in, as soon as I became aware of them. Every parked vehicle along the side of a road. Every vehicle approaching or waiting to enter an intersection. Every pedestrian standing by a crosswalk or even walking along the sidewalk adjacent to the traffic lane. Every bicycle. Every vehicle traveling the opposite direction on streets without a hard median. And every time I missed one, he would point and say "what about that car over there?" or "what about that person on the sidewalk?" He made me do this until I didn't miss any.

And then he started me on watching for suspicious gaps in the parked cards along the side that could indicate a loading bay or a driveway or an alley or a hidden intersection. And so on though multiple categories of collision hazards, and then verbally indicating them to him while I was driving.

And the reason for that exercise was to drive home the point that if there's a vehicle or a person that could get into my lane, it's my job as a defensive driver to be aware of that and be ready to react. Which includes making sure I could stop or avoid in time if I needed to.

I don't know how driving is taught now, but I would hope a self-driving system could at the very least match what my human driving instructor was capable of.


Sounds like you had a great driving instructor. Although I never had an experience like that when learning to drive, we did have to complete something in the UK called a "hazard perception test"[1] in order to get a drivers license. Basically a video version of what your instructor did for you. Until reading your comment today, I'd never really put much thought into how useful this is and how ingrained in my everyday driving it is.

[1] https://www.youtube.com/watch?v=SdQRkmdhwJs


A thing I like to do to stay in practice is to browse /r/roadcam over on reddit. I open a video at random and watch it, and try to guess where the collision (or near-collision) is going to come from.


That sounds like a great exercise, I wish it was standard. But I'm guessing you must have taken this course in a rural area, or else be able to talk faster than an auction caller. I don't think I could list off all the hazards in an urban area fast enough. :)


Sounds like an exceptional driving instructor to me. Exceptionally good.


Indeed. This is why many cities are reducing speed limits.

In fact, self-driving cars may actually improve the situation if cars actually start complying with speed limits en masse.


Good point -- there's a 4% chance of fatality when struck by a 15mph car versus 20% at 30mph.

There's a 30mph city street near me where cars routinely go 45mph -- the fatality rate jumps up to 60% at that speed. So just having cars follow the speed limit would go a long way toward reducing fatalities.

https://www.propublica.org/article/unsafe-at-many-speeds
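
Just to make the nonlinearity visible, here's a tiny sketch that interpolates between the three figures quoted above (15mph: 4%, 30mph: 20%, 45mph: 60%); these are the numbers from this comment and the linked article, not an authoritative curve:

    # Piecewise-linear interpolation between the quoted (speed, fatality risk) points
    POINTS = [(15, 0.04), (30, 0.20), (45, 0.60)]

    def fatality_risk(speed_mph):
        if speed_mph <= POINTS[0][0]:
            return POINTS[0][1]
        if speed_mph >= POINTS[-1][0]:
            return POINTS[-1][1]
        for (s0, r0), (s1, r1) in zip(POINTS, POINTS[1:]):
            if s0 <= speed_mph <= s1:
                return r0 + (speed_mph - s0) / (s1 - s0) * (r1 - r0)

    print(fatality_risk(35))   # ~0.33 -- even 5mph over a 30mph limit matters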


I'm greatly in favor of a slower-moving but more efficient automotive network. How many human inconveniences are caused by what boils down to impatience? See: gridlock, people entering intersections they cannot leave, and jamitons.

(IMHO) jamitons would dissolve if people would leave a flexible buffer between themselves and the car in front and focus on minimizing braking, rather than driving up to the bumper, braking, waiting for movement, accelerating, braking, and repeating. The lag due to reaction time and ac/deceleration exacerbates the "viscosity" of traffic flow. If most drivers focused on "staying fluid" rather than hurry-up-and-wait, traffic ought to improve. Like fluidized beds.

https://math.mit.edu/projects/traffic/


Indeed what? It's obvious that slower cars will kill fewer people. That's meaningless without saying what the cost of driving slower is.


I'd like to think that the economic cost of regularly killing people on the streets is higher than getting to a place ten minutes faster. American traffic fatalities per year are basically equivalent to killing off a large town.

No one is saying reduce speeds everywhere. But in an urban context with lots of pedestrians, these speeds matter, and urban traffic is generally so stop-and-go and congested that drivers rarely sustain the top speed, and reducing it doesn't actually affect travel time by all that much.


"I'd like to think that the economic cost of regularly killing people on the streets is higher than getting to a place ten minutes faster."

However comforting that logic may be, it is unusable in the real world. If you value lives infinitely, then you will never ever do anything that risks your or somebody else's life in order to gain on any other need. You routinely engage in things that are not the safest possible option in order to fulfill other needs, ranging from food or water acquisition through mere entertainment. Therefore you place a finite value on your life. Don't feel bad; so does everybody else. It is possible to determine the value placed on life (I believe there are studies showing the value is more stable than you might think) and to balance things appropriately.

It may be uncomfortable thinking, but, again, unless you literally never take even the smallest risk in the pursuit of other goals, you are already thinking this way. You just haven't lifted it up to the conscious level yet.


No one is asking for infinite value. A cursory glance at causes of death rates in USA reveals that far too many people are dying in automobile collisions. There is nothing about our world that requires that level of carnage. Future generations will find our customs ghastly.


As in the famous Churchill quote, once you agree it's not infinite, now we're just dickering about price.

I'd say you're almost right about nobody asking for infinite value, but I'd say it's more like nobody who has pulled this up to the conscious level is asking for infinite value. People who have not examined the belief are quite prone to speaking as if life's value is infinite... but their own actions inevitably belie that claim. Once examined, it becomes rationally obvious that life is not infinitely valuable (including your own), but, well, if humans automatically accepted and believed all rational things they examine without emotional consequence the world would be a very different place.


Agreed, your parent post was strawmanning. Is there a name for this second-order sort of meta-strawmanning, in which we imagine people's unconscious inclinations, rather than merely imagining their arguments?


The United States government led by the Obama administration came up with values from 6-9 million dollars when weighing marginal costs of safety regulations.

http://www.nytimes.com/2011/02/17/business/economy/17regulat...


>It is possible to determine the value placed on life, I believe there are studies that show the value is more stable than you might think, and balance things appropriately.

For evaluating safety regulations relative to the cost of implementation, NHTSA values the risk of loss of human life in relation to the market value of risk-reducing products, and the safety they provide.

They extrapolate from the take rate of airbags, their cost, and their effectiveness to how much value the average American places on their own life. IIRC it's on the order of $5 million.

FYI, juries do not look kindly on companies that implement this in liability suits. The data is that if a corporation writes down a $-figure for a (statistical) human life, it anchors punitive fines at a higher level.


Well, you do waste minutes of people's lives. Let's see: 100 million people driving 250 days/year, losing 10 min each way (so 20/day). That's about 12k lives of 80 years. It's actually worse, because you are wasting "awake time", so add 30%. It seems America isn't /that/ far off from the optimum. Maybe better driver education or better roads would be more effective?
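
A quick sketch of that arithmetic, using this comment's assumptions (100 million drivers, 250 days, 20 minutes lost per day):

    # Converting aggregate delay into "equivalent lifetimes" per year
    drivers = 100e6
    days_per_year = 250
    minutes_lost_per_day = 20

    hours_lost_per_year = drivers * days_per_year * minutes_lost_per_day / 60
    lifetime_hours = 80 * 365.25 * 24          # an 80-year life
    awake_lifetime_hours = 80 * 365.25 * 16    # only counting waking hours

    print(round(hours_lost_per_year / lifetime_hours))        # ~12k lifetimes/year
    print(round(hours_lost_per_year / awake_lifetime_hours))  # ~18k counting awake time only

For comparison, US traffic deaths run on the order of 35-40k per year, so the two quantities are at least in the same ballpark, which is this comment's point.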


Regardless of what you'd like to think, actual costs and benefits of driving speeds would make a more compelling argument.


The real issue with driving slower is enforcement. Enough people don't take into account the speed limit that changing the speed limit without changing the roads leads to unsafe mixed speeds.

Universally adhered to lower speed limits in urban environments would be great.


Certainty of enforcement. People are careful to obey a rule punished 100% of the time with a $1 fine. They brush off a 10^-5 probability of a $100,000 loss by thinking "It won't happen to me."


Correct. I often walk down a road with a 20mph 'limit' where most cars are doing around 40 - and some appear to be doing more like 50. There's simply no economically viable way to enforce it, so it will continue like this until there have been a couple of fatalities.


There's simply no economically viable way to enforce it

Conduent offers a turnkey solution for this. They provide and manage speed cameras: https://www.conduent.com/solution/transportation-solutions/r...


Not a chance. And that is a good thing. This knee-jerk towards "let's just monitor everyone, everywhere and automate the law" is antithetical to a free society. Most people know that, which is why speed camera votes always send that company (RedFlex or whoever) packing.


I don't know about the US, but where I live speed cameras are relatively large, brightly colored boxes with reflective stripes on the side of the road, with mandatory "speed camera ahead" warnings.

Most of them are empty but people unfamiliar with the place will usually slow down.


The laws are state and local. In AZ it's a city or township, then the people put it on the ballot and it gets shut down. Tucson voted 65% no cameras, and later we got a statewide ban on highways, so it's still a work in progress. The tickets are civil law, so you can throw it out, frame it, or make a coffee table book if you get enough :)


You can lower speeds quite drastically with better street design, no enforcement needed. Make the road narrower and curvier instead of wide open and straight. You can even add bumps.


You'll have to leave earlier for your important appointment!


I suppose you're suggesting there's a distinction between someone losing x hours of life due to travel time vs. someone being killed and losing y hours of life. I'm sure you can see why someone attempting to create a reasonable policy might avoid making that distinction.


I thought you were asking about "the cost of driving slower"? I'm perfectly serious in my answer. Any transportation goal (with the possible exception of ambulance service?) that may be accomplished at high speed, also can be accomplished at lower speed, with sufficient planning.


>also can be accomplished at lower speed

But what about an even lower speed? If 15 mph is good, then 5 mph is better. And if 5 mph is better, 1 mph is superior once more.

I think we can agree there is a point where slow becomes too slow and the 'sufficient planning' becomes an unreasonable burden. So given we aren't operating off the notion that slower is inherently better, then there is some equation giving us our optimal point. What if that point is 45 mph instead of 15 mph?

In short, how do we argue that 15 mph is better than 45 mph that can't also be applied to speeds lower than 15 mph?


It's easy - you have diminishing improvements in pedestrian survival rates. 40MPH+ is associated with a fatality rate of over 50%, whereas you get to 30MPH and you have 7%, and 20MPH is essentially zero: https://nacto.org/docs/usdg/relationship_between_speed_risk_...


Even if it is close to zero, how do we decide if the lives saved from going from a .1% fatality rate to a .09% fatality rate is worth the speed reduction or not?


jerf addressed this above. What you're saying isn't accurate. Doing things more quickly avoids wasting hours of life. That's a benefit. The cost is measured in hours of life of people killed and injured as a result of doing things at a chosen speed.


What activity are you talking about, that is feasible at 40mph in an urban environment but not at e.g. 25mph? Can you not imagine a different way of conducting that activity?

There is a fundamental inequity between the operator of dangerous equipment comparing hours and years of life, and the pedestrian who suffers the consequence of that comparison. The USA auto industry is built on this inequity, which is why no one ever talks about it.


Any activity which requires things to be moved: meeting a friend, making a delivery, and so on.

Lives, dollars, wasted time are all fungible.

Your new, inequity point adds externalities to the discussion. That's fine, but those are also measured in lives, dollars, wasted time.


Many people meet friends and make deliveries e.g. via bicycle. Those motorized vehicles that are set aside specifically for deliveries often travel more slowly than other motorized vehicles.

But I shouldn't pick nits; you were speaking in generalities! So when I said "any transportation goal" and you disagreed, you hadn't actually thought of a particular exception to my universal statement. And you still haven't thought of one. Are your contributions to this discussion offered in good faith?


1. I want to visit a friend. He lives an hour away by car. New speed limit changes it to two hours. I lose an hour of life in the car.

2. Auto plant needs radiators. Truck delivering them takes longer to get them there. Costs $x more. Car costs more. Car buyer has to work longer to buy car. Car buyer wastes y hours of life.


Or maybe you don't visit your friend so often. Or you decide that 90 minutes on the train is better than driving, or maybe your friend gets tired of making a 2 hour trip to the city, so he moves closer.

There are lots of alternatives that don't involve you spending more time driving.

While it's true that goods will cost more to transport if it takes longer, that higher charge is amortized across many products in the truck, so is a very small portion of the finished product.

So if a radiator fits in a box 16x24x6" or 1.3 ft^3, and a 40ft truck holds 2400 ft^3 (subtract 20% since it won't be a perfect fit, so 2000 ft^3), you can fit about 1500 of them in the truck.

If a truck+driver costs $100/hour, that means each radiator will cost 13 cents more.

Or, another way of looking at it -- all of the parts that make up a car aren't going to be bigger than a car (sure, some space is lost to packaging, but there's a lot of empty space in a car), and 6 - 10 cars can fit on a car carrier truck, so each car will end up costing around $25 more.

Though since we're talking about urban speed limits, and there aren't many urban car manufacturers, slow urban speed limits won't affect the price of cars.
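
Running the same numbers as a small sketch (all inputs are this comment's own assumptions):

    # Cost of delay per radiator, spread across a full truckload
    box_ft3 = (16 * 24 * 6) / 1728        # ~1.33 ft^3 per boxed radiator
    usable_truck_ft3 = 2000               # 2400 ft^3 trailer minus ~20% packing loss
    radiators_per_truck = usable_truck_ft3 / box_ft3    # ~1500
    truck_cost_per_hour = 100             # truck + driver

    print(round(truck_cost_per_hour / radiators_per_truck, 3))
    # ~$0.07 per radiator for each extra hour on the road; the 13 cents above
    # presumably corresponds to roughly two extra hours of driving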


I suppose this subthread is complete, because you have now completely agreed with my original statement: you'll have to leave earlier for your trip to visit your friend, and delivery trucks will have to allocate more time (routes, trucks, drivers, etc.) for their deliveries. Alternatively, freight trucks might make fewer mostly-empty trips. Note that these two examples clearly match the characterization I provided: both may be accomplished at lower speed, with sufficient planning. This will be the "cost" of safe driving.


You don't seem to understand that leaving earlier to do something is different than leaving later to do the thing.


And not even necessarily that.

Often speeding just gets you to the next intersection, or red light, quicker, where you then have to wait longer for traffic, or a green light.


How often? How often would you manage to go through the intersection before the lights change, and gain even more?

(edit: just to be clear, I'm not in support of speeding, however I do value good logic.)


I can't find it, but I remember reading that an aggressive driver saves on average 20 seconds on what is on average a 10 minute trip.


What cost do you put on your own life? Let's start from there.


>Indeed. This is why many cities are reducing speed limits. In fact, self-driving cars may actually improve the situation if cars actually start complying with speed limits en masse.

The vast majority of people just go however fast they feel comfortable (considering conditions, etc) regardless of the speed limit.

Mixed traffic speeds decrease safety.

Raising speed limits, so that the people who comply with the letter of the law aren't traveling slower than the people who go however fast they're comfortable, usually improves safety.

Unless your goal is to increase ticket revenue or appease the "think of the children" crowd, there's no point to lowering speed limits. It doesn't do much to affect traffic speed. To do that you have to modify the road or do something to change the traffic flow.

Self driving cars will improve safety because they'll result in political pressure to raise speed limits to match reality and they'll make dynamic speed limits more practical.


>Mixed traffic speeds decrease safety.

Sure, but city streets are already mixed traffic. There are pedestrians, bikes, vehicles parking or turning, etc. It's not reasonable to raise the limit to what people want to drive and just ignore all the other users of the street.

Also, the optical narrowing mentioned in a sibling comment is quite effective. They've done that on a few streets near me via things like sidewalk bulb-outs at intersections, and swapping the parking lane & bike lane (so it goes curb-bike-parking-drive, rather than curb-parking-bike-drive). Everyone drives more slowly on those streets now - myself included.


It does work when the roads are designed properly. No, posting a random speed sign isn't going to slow down traffic. But speed bumps, chicanes, turns, smaller lanes, etc. will naturally slow down traffic, making it impossible to drive dangerously fast.

For more, see https://www.vox.com/the-big-idea/2016/11/30/13784520/roads-d...


This is why a lot of people advocate 'optical narrowing' and other methods. These are supposed to make people actually want to go the speed limit.

The trick is to make people feel like driving fast is unsafe, without actually making driving any less safe.


"Mixed traffic speeds" is a concept for 4-lane highways, not 1 or 2 lane city streets. It would not take many law abiding vehicles to bring a one or two lane road down to the speed limit.


> Mixed traffic speeds decrease safety

Exactly. That is why we should ban all cars anywhere there is people, and they should be limited to highways. Because safety is most important, right? Right???


> Unless your goal is to increase ticket revenue or appease the "think of the children crowd" there's no point to lowering speed limits. It doesn't do much to affect traffic speed. To do that you have to modify the road or do something to change the traffic flow.

It's very simple: put speed cameras on every corner, and set fines in terms of daily wages, e.g. one week of your income for going x% above the limit.

Several countries are doing parts of this already, or moving towards it.


Braking distance ... for a 30mph is 45 ft

Apologies for going off topic here, but I'm curious about this. I've tested every car I've ever owned and all of the recent cars with all-round disc brakes have outperformed this statistic, but I've never been able to get agreement from other people (unless I demonstrate it to them in person).

I'm talking about optimal conditions here (wet roads would change things, obviously), but each of these cars was able to stop within its own car length (around 15 feet) from 30mph, simply by stamping on the brake pedal with maximum force, triggering the ABS until the car stops:

2001 Nissan Primera SE

2003 BMW 325i Touring (E46)

2007 Peugeot 307 1.6 S

2011 Ford S-Max

I can't work out how any modern car, even in the wet, could need 45 feet to stop. In case it's not obvious, this is only considering mechanical stopping distance, human reaction time (or indeed computer reaction time which is the main topic here) would extend this distance, but the usual 45 feet from 30mph statistic doesn't include reaction time either.


I knew I'd get called out for not including sources. Those figures are from published sources, and do not include decision time. I'd imagine that these sources are for "average" roads and "average" cars.

http://www.government-fleet.com/content/driver-care-know-you...

http://www.brake.org.uk/facts-resources/15-facts/1255-speed

Car and Driver did a test with sports cars and professional drivers and came up with 142 - 155 ft from 70mph, while my first reference quotes 245 ft (around 40% less, so extrapolating, their stopping distance from 35mph would be around 20 feet).

https://www.caranddriver.com/features/rocket-sleds-the-best-...

The average car on the road is not a sports car with performance tires and is not stopping on a clean, dry track. So I don't think it's a stretch to assume that an average car on average roads with tires optimized for tread life would be 40% worse than a $100K sports car with $400 tires that are optimized for grip rather than lifetime.


I wasn't aiming to call you out, as such. I just wanted to air my opinion, but thank you none the less for providing some sources :-)

The 45 feet from 30mph is a common figure (the UK government's Highway Code uses it as well).

The cars I tested are normal cars, but I concede I had good tyres and I always choose smooth roads to test on. Once you include:

1. Driver ability

2. Vehicle quality

3. Road quality

4. Prevailing conditions

5. 4 passengers plus luggage

I guess you can explain the difference.


In addition, some cars such as the Nissan Leaf have a feature that locks the brakes at full power when there is a sudden control shift from accelerator to brakes, meaning that if you stomp on the brakes and then reduce pressure, the car will continue to brake at maximum power. This was done for two reasons: one, people hesitating in emergency situations, and two, people being taught to pump brakes, which increases braking distance in cars with ABS.

I learned about this feature when reading car forums for my car and finding threads from people who were rear-ended when they accidentally triggered this feature by slamming on the brakes when they didn't intend to come to a complete stop.


> In addition, some cars such as the Nissan Leaf have a feature that locks the brakes at full power when there is a sudden control shift from accelerator to brakes, meaning that if you stomp on the brakes and then reduce pressure, the car will continue to brake at maximum power.

Which is utterly stupid. Braking is the natural reflex, but not always the right one. I've been in more than one close call (think left turn across oncoming traffic) where the correct response was not to floor the brake, but to floor the gas.

> people being taught to pump brakes, which increases braking distance in cars with ABS.

Which is a mechanical turk version of what the ABS is doing under the hood.


That seems deeply unexpected. Was there any resolution or response from Nissan in any of the cases you read about?


> I'm talking about optimal conditions here, wet roads would change things obviously but each of these cars was able to stop within it's own car length (around 15 feet) from 30mph, simply by stamping on the brake pedal with maximum force, triggering the ABS until the car stops

How did you measure that? Because plugging these figures into a uniform acceleration calculator, 50km/h to 0 in 4.47m requires a deceleration of 2.2g but "Analysis of emergency braking of a vehicle"[0] experimentally measured very best case deceleration as barely scraping 1g (with ABS at 80km/h, significantly lower at lower speeds or without ABS).

[0] https://www.tandfonline.com/doi/pdf/10.1080/16484142.2007.96...
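
For reference, here's the one-line check behind that 2.2g figure (a sketch using the same inputs):

    # Uniform deceleration needed to stop from 50 km/h in one claimed car length
    v = 50 / 3.6     # 50 km/h in m/s (~13.9 m/s)
    d = 4.47         # metres (~15 ft)
    G = 9.81

    decel = v * v / (2 * d)
    print(decel / G)   # ~2.2 g, roughly double the ~1g that good tyres on dry asphalt deliver

So either the measurement method overestimated the stopping performance, or the actual stopping distance was rather longer than one car length.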


So, there's a performance envelope expected. Sure, someone can bungee jump off an overpass and not be avoidable. :)

But they should be willing to walk in front of it in an in-spec performance regime. There's some really good Volvo commercials along that line, with engineers standing in front of a truck.


Unlike humans who have limited vision, self-driving cars are generally able to observe all obstacles in all directions and compute, in real-time, the probability of a collision.

If a car can't observe any potential hazards that might impact it using different threat models it should drive more slowly. Blowing down a narrow street with parked cars on both sides at precisely the speed limit is not a good plan.


You could do that, but how far do you take it? Do you program a car to slow to 10mph every time it passes a parked delivery van just in case? Would people find that acceptable?


Yes? If self driving is being hailed as being safer, it, you know, should be. If that means doing all the boring stuff I would not bother to, so be it. How else will they be safer unless by ignoring our driving biases?


The speed limit for this particular stretch was 45 mph. (sounds high to me, considering there is a bike lane)


Phoenix transportation infrastructure is notoriously bad at handling pedestrians and bicycles because so few people are going to be out biking in 120F summers. There's a knock-on effect of exclusively designing a city around middle-class car-and-house types of folks. It's why I left after 20 years in Phoenix and Tempe.


In Atlanta, bicycles and cars share the road (with or without a bike lane) at 45mph speed limits.


If self-driving cars are limited to speeds that allow them to stop within their lidar max range, is that too slow? Humans don't have the pinpoint accuracy of lidar, but our visual algorithms are very flexible and robust and also have very strong confidence signals, e.g. driving more carefully in dark rain.

Cameras are not accurate enough though, their dynamic range being terrible. I wonder how humans would fare if forced to wear goggles that approximated a lidar sensor's information.
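
As a rough sketch of what "only drive as fast as you can stop within sensing range" would mean, solving v*t_react + v^2/(2a) = range for v (the latency, deceleration and range values here are my assumptions, not any vendor's spec):

    import math

    def max_safe_speed_mph(sensing_range_m, t_react=0.3, decel=0.7 * 9.81):
        # Positive root of v^2/(2a) + v*t = R
        a, t, R = decel, t_react, sensing_range_m
        v = -a * t + math.sqrt((a * t) ** 2 + 2 * a * R)
        return v / 0.44704

    print(round(max_safe_speed_mph(60)))   # ~60 mph if 60 m of range is reliable
    print(round(max_safe_speed_mph(30)))   # ~41 mph with only 30 m (rain, glare, occlusion)

Under those assumptions, a car that trusts only ~30 m of sensing shouldn't be doing much more than 40 mph.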


The advantage self-drivers have is in 1. minimizing distraction / optimizing perception 2. minimizing reaction time.

Theoretically self-drivers will always see everything that is relevant, unlike a human driver. And theoretically a robot-driver will always react more quickly than even a hyper-attentive human driver, who has to move meat in order to apply the brake.


But is that the actual situation we're talking about? Or are we actually talking about a situation where the person may have been jaywalking but would have had a reasonable expectation that a human driver would stop? I walk a decent distance to work every day and I don't think anyone totally adheres to the lights and crosswalks (not least because if you do you will be running into the road just as everyone races to make a right turn into the same crosswalk you're in).


What about steering away, a la ABS?


Then we should at least learn their capabilities by throwing jumping dummies at them. Call it the new dummy testing. It's the least we can do. Did Travis bro do this when he set the plan in motion?


We don't throw jumping dummies in front of human students before we let them drive; why should machines be different?


Maybe not in the U.S., but in Norway, as part of getting the driver's license, you'll have to take an ice driving course where, amongst other things, a dummy is swung at your car while you're driving on an oiled lane at 30 mph. (It is practically impossible to steer away from the dummy, which is the point of the exercise: to realize the folly of driving too fast on icy roads.)


Because we have a pretty good idea of how humans react in traffic, but we're still a bit unsure about robots.


Because it's assumed human drivers will at least have a desire to stop. If a machine isn't properly programmed, it will plow right through a crowd and never look back.


There is no inherent equality between humans and machines. Is it conceptually difficult to grasp that machines can be held to a different standard?


The ancient Romans would have the civil engineer stand under the bridge they'd just built while a Legion marched over it. That's why Roman structures are still around today!


>The ancient Romans would have the civil engineer stand under the bridge they'd just built while a Legion marched over it. That's why Roman structures are still around today!

JFYI, it is most probably apocryphal: https://skeptics.stackexchange.com/questions/18558/were-roma...

still it's a nice one, I have heard the same about engineers/architects in ancient Babylon and Egypt.


Can't speak for the Romans, but this seems legit enough, and predates the Romans by quite a bit:

Hammurabi's code:

> If a builder builds a house for a man and does not make its construction firm, and the house which he has built collapses and causes the death of the owner of the house, that builder shall be put to death.

> If it causes the death of the son of the owner of the house, they shall put to death a son of that builder.

https://www.fs.blog/2017/11/hammurabis-code/

Heard about it on Econtalk's recent episode with Nassim Taleb.


A more verifiable story in a similar vein would be Frank Lloyd Wright standing under an exemplar of his "dendriform" columns, developed for the Johnson Wax Headquarters in Racine, laughing and whacking it with a cane as it was undergoing a loading test for the building code enforcement officers.


That was one of Wright's famous leaky-roof buildings.

http://journaltimes.com/news/local/confronting-a-legacy-of-a...


I'm not saying he was good, just self-confident.


A similar story comes from World War II where an alarmingly high number of parachutes were failing to open when deployed. They started picking random chutes and their packer and sent them up for a test drop. The malfunction rate dropped to near zero.


Heard a similar myth about a Danish king. He was tired of cannons blowing up, so he ordered the manufacturer to sit on top of the cannon when it was fired for the first time.


Something similar, but self-imposed happened when they moved a whole building in Guadalajara: https://en.wikipedia.org/wiki/Jorge_Matute_Remus#Movement_of...

"In order to gain the trust of the employees he asked his wife to enter the building while the movement was taking place."


For anyone who is curious like I was, there's a related episode on 99percent invisible that links to a YouTube documentary, although it's in Spanish:

https://99percentinvisible.org/episode/managed-retreat/2/


The comparison is odd to me. Somehow building bridges seems more of an exact science to me than making cars drive themselves. I sure wouldn't step on a bridge if its engineer doesn't dare go under it. Shit is supposed to stand up.


And yet didn't one collapse in Florida just the other day?


Isn't that just survivorship bias? The good ones didn't collapse?


Do you know if this is factual? I've read that it's only a myth.


May have been allegorical to describe what would happen to engineers if their bridges failed.


Would they do it enough times for such observation to have any statistical significance?


That misses the point. If you designed a bridge, and you died if it failed in one trial, how would this affect your design process?


I believe this method falls strictly under deterrence.

It is not possible to exact more punishment on an individual than their death, technically, unless you believe in an afterlife.

It is not punitive and meant to 'correct' the builder; it is meant to 'prevent' the builder from cutting corners.


"It is not possible to extract more punishment on an individual than their death, technically, unless you believe in an afterlife."

Not true at all. Not that I'd recommend it, but there's torture, and punishing family. Those were considered far less outrageous in ancient times.


Likely they were not familiar with the scientific method and statistics as we know it today.


"The people leading the development should demonstrate that they can stop for pedestrians by personally jumping out in front of them on a closed test road. If they're not able to demonstrate this, they shouldn't be putting them on public roads."

Although the actual logistics of your proposal might be challenging (child comments point out that some speeds/distances might be impossible to solve) your instinct is a correct one: the people designing and deploying these solutions need to have skin in the game.

I don't think truly autonomous cars are possible to deploy safely with our current level of technology but if I did ... I would want to see their family and their children driving in, and walking around, these cars before we consider wide adoption.


My opinion is that we will come to a point where self-driving cars are demonstrably, but only marginally, safer road users than humans.

From an ethical standpoint the interesting phase will only start then. It's one thing to bring a small fleet of high tech (e.g. having LIDAR) vehicles to the road. It's another to bring that technology to a saturated mass market which is primarily cost driven. Yes, I assume self-driving cars will eventually compete with human driven ones.

Will we, as a society, accept some increase in traffic fatalities in return for considerable savings that self-driving cars will bring?

Will you or me as an individual accept a slightly higher risk in exchange for tangible time savings?


> demonstrably, but only marginally, safer road users than humans.

I believe the claim is 38 multitudes better than humans, significantly better than marginally.

> accept some increase in traffic fatalities

No. And the question is more about "some" than "some increase"

> accept a slightly higher risk in exchange for tangible time savings?

Bans on texting while driving and even hands-free talking were becoming law in many states before smart phones -- and my experience is that many people readily accept this risk, and the legal risk, just to communicate faster. The same can be said for the risk of drunk driving -- it's a risk that thousands of Americans take all the time.


We do all the time, and the answer to your question is, yes if marketed ingeniously. Humans are stupid.


Indeed, history gives us ample examples, but I wouldn't call humanity stupid because of that. Each and every such example is an ethical question that we have to solve one way or the other. Burying our heads in the sand will not make technologically advanced and expensive self-driving vehicles go away. Neither will it prevent cheap self-driving clunkers.


This isn't a good argument because it implies if these AVs have successfully not killed their CEOs during a closed (i.e. controlled) test, that they are safe on public roads. But it seems like the majority of AV accidents so far involve unpredictable and uncontrolled conditions.

IOW, setting this up as some kind of quality standard gives unjustified cover ("Hey, our own CEO risked his life to prove the car was safe!") if AVs fail on the open road, because the requirements of open and closed tests are so different.


IIRC the people who programmed the first auto pilot for a major airliner were required to be on board the first test flight, so I have to think their testing methodology was pretty meticulous.


Auto Pilot is not a "Push a button and it goes from Airport A to Airport B". It only helps in a few cases.


It wasn't but it increasingly is getting to that point. You take off (with fly by wire adjustments from the computer) from Airport A and tell it to fly to Airport B and it can even automatically land there.

Some airlines do not allow pilots to manually fly above 3k feet, nor allow first officers to land manually[0].

0: https://www.wired.com/story/boeing-autonomous-plane-autopilo...


Which then gets you pilots frobbing the autoland selector, not getting the mode right - and now what, if you have zero practice, having scripted yourself out of it? Now you fly the plane into the runway: https://en.wikipedia.org/wiki/Asiana_Airlines_Flight_214

There is a (prevalent) happy path where autopilot and autoland can get you the whole illusion "but the plane flies itself" - but the whole system is built around humans catching and handling all and any exceptions.


> Auto Pilot is not a "Push a button and it goes from Airport A to Airport B". It only helps in a few cases.

A USAF C-54 made a complete transatlantic flight (takeoff and landing included) on autopilot back in 1947: https://www.flightglobal.com/pdfarchive/view/1947/1947%20-%2...

Modern Cat IIIc (zero-zero) autopilots handle everything except taxiing and takeoff. And I think the takeoff thing is more political than technical.



> The people leading the development of these horse-drawn carriages should demonstrate that they can stop for pedestrians by jumping in front of them on a closed test dirt-path. If they're not able to demonstrate this, they shouldn't be putting them on public carriageways

Sounds silly when compared against old tech.

Accidents happen, best we can do is try to prevent them.


Imagine if we fully separated car traffic from foot traffic. Would this have happened then?


I mean, you still have issues like wildlife on the roadway.

It's impossible to fully remove the issue. Vehicles still need to react to and reduce accidents, but they will absolutely never eliminate them.


This reminds me of a story I heard in college... an owner of a company that builds table saws demonstrated his saw's safety feature - a killswitch that pulls the blade down into the machine as soon as it detects flesh - by putting his hand on it while it was spinning.



Very good point. I would not be surprised if Uber put those cars out with a barely working model to collect enough training data, and the human operator who was supposed to correct such errors was tired and didn't intervene. Some basic driving test for self-driving cars, or other mitigating measures, needs to be added IMMEDIATELY; otherwise many more people will probably die or be injured. Train operators need to prove they are awake and attentive using periodic button presses; similar things need to be required for these research vehicles if you want to allow them at all.


Human-driven cars are currently, as we speak, running people over on the streets, and they have human drivers who don't particularly want to run over other humans. It was inevitable that this happened, and however many people self-driving cars run over, it will be worth it, since it will still be fewer than the death toll cars are currently causing.


There wouldn’t be many human drivers on the road if they had to pass this test as well.


Bad idea. Even Uber corporate VPs shouldn't be exposed to unnecessary injury risk.

https://www.youtube.com/watch?v=_47utWAoupo


goes to show, sdv's have to be perfect in the eyes of the public. you wouldn't seriously recommend adding a test like that to a regular driving licence.

also that testing does happen in the case of every av program i know of. closed obstacle courses with crap that pops out forcing the cars to react. look up gomentum station.

† i did it for you. http://gomentumstation.net/


This was a cyclist. So they should be on bikes.

Just like they take the repair mechanics on the first test flight after a major repair of a big airplane.


Said that a few times: how does any AI recognize whether the object in front of me is just a white plastic bag rolling in the street, or a baby that rolled out of a crib? AI cannot know and will never know. And we cannot have cars drive smoothly in traffic if every self-driven car stops before hitting an empty plastic bag.


How do we recognize whether an object in front of a car is just a plastic bag in the wind or a baby?

o At speed we're pretty ok with cars hitting things people drop on the road, examples of cars hitting wagons and babies are already plentiful

o Visual recognition & rapidly updated multi-mode sensor data, backed by hard-core neural networks and multi-'brained' ML solutions, have every reason to be way better at this job than we are given sufficient time... those models will be working with aggregate crash data of every accident real and simulated experienced by their systems and highly accurate mathematical models custom made to chose the least-poor solution to dilemmas human drivers fail routinely

o AIs have multiple vectors of possible improvement over human drivers in stressed situations. Assuming they will bluntly stop ignores how vital relative motion is to safe traffic, not to mention the car-to-car communication options available given a significant number of automated cars on the road -- arguably human drivers can never be as smooth as a fleet of automated cars, and "slamming the brakes" is how people handle uncertainty already

Presently the tech is immature, but the potential for automated transport systems lies well beyond the realms of human capabilities.


I'm surprisingly okay with self-driving cars stopping for plastic bags in the near term.

Self driving tech is up against the marketing arm of the auto-industry, they've got to be better than perfect to avoid public backlash. If they're a bit slower but far safer then I think they'll do well.


Actually I slow down for plastic bags. And not because I care for their safety.


Yeah at sufficient speed one can't differentiate plastic from some other object that could damage one's vehicle.


What? AI can know in the exact same way you can know. What are you talking about?


Why? It’s not a problem if Uber kills pedestrians, even in situations where it’s completely avoidable. It’s only (legally) a problem if they’re violating the rules of the road while doing so.


> It’s only (legally) a problem if they’re violating the rules of the road while doing so.

I doubt that's true, but even if it were, I believe the rules of the road are "always yield to pedestrians, even if you have the right-of-way".


In general motorists are not held strongly accountable for killing pedestrians or cyclists. Juries in the U.S. are very reluctant to convict. I'm not sure Uber can rely on that, however. While juries are reluctant to convict drivers, driverless vehicles are another matter. As I said in another comment, they're going to have to invent a crime that puts cyclists and pedestrians positively at fault in these situations.


People jump in front of the NYC subways multiple times per week in an attempt to commit suicide, how often do you read about the train engineers going to prison for that? Literally never. If someone tries to commit suicide by jumping in front of your car then you're generally not legally liable, as long as you're not intoxicated and you pull over immediately and phone it in.


First, I assume the rules for trains are different than for cars. Second, I expect that someone jumping in front of your car is different than running over a cyclist in a bike lane.


I think their (idiotically stated) point was that there are plenty of situations where you can kill someone on the road in an accident and not have any legal liability for it.


This opens up an interesting question going forward. We can't rely on Uber themselves to analyse the telemetry data and come to a conclusion, they're biased. So really, we need self driving car companies to turn over accident telemetry data to the police. But the police are ill equipped to process that data.

We need law enforcement to be able to keep pace with advances in technology and let's face it, they're not going to have the money to employ data analysts with the required skills. Do we need a national body for this? Is there any hope of a Republican government spending the required money to do so? (no)


NTSB has already announced that it will investigate: https://techcrunch.com/2018/03/19/ubers-fatal-self-driving-c...

I guess we'll find out how competent they are at this, but I think they did a surprisingly good job with the previous Tesla investigation: https://dms.ntsb.gov/pubdms/search/hitlist.cfm?docketID=5998...


On the one hand I'd say give it to the NTSB, because historically they are really good at this sort of thing.

On the other hand, I'd wonder if increasing NTSB scope this much would drastically decrease the average quality of NTSB's work. Scaling ain't easy.


Yeah, pilot here. NTSB is the right organization to handle this. The investigators over there do an amazing job of determining root cause from forensic evidence. I assume that will be the process here.


NTSB is there to handle civilian transportation accident investigations. Most automotive accidents are not very "surprising", which leaves them mostly dealing with non-personal transportation accidents. We are about to have a rapid increase in surprising, non-personal transportation accidents, so I seriously hope they are afforded the resources to deal with the influx as AVs come online.


Shouldn't this actually be pretty easy? The system on the Uber should have a massive number of cameras, plus lidar. Basically dashcams on speed, recording multiple angles of the accident. I would assume that everything is being recorded for debug/testing purposes.


NTSB should expand to launch an automotive division.

Bonus: Job creation to replace the jobs lost to automation


NTSB already has jurisdiction here, as well as on all rail accidents involving injuries, in addition to aviation accidents.


Aren't you basically just proposing that the NTSB analyze automobile telemetry, the same way they analyze aircraft telemetry data? Doesn't seem wildly outside the realm of possibility.


The NTSB could certainly do it. But they'd need to expand a lot in a world where self driving cars are an everyday reality, which again comes back to the question of funding.


IMO Uber should be on the hook for funding the NTSB investigation; smaller competitors should be able to get insurance if they need to.


It's a pretty bad look to have the involved parties directly fund the investigation. Some sort of general automated-vehicle tax may be more appropriate.


Not really. The EPA has done this for decades, specifically for Superfund sites. When you can, it's more efficient to have bad actors pay for their bad acts rather than burdening an industry as a whole.


Law enforcement already conducts traffic accident investigations where the involved drivers are biased parties of inconsistent honesty and imperfect memory that can and do both accidentally and intentionally misrepresent the facts.

I don't think self-driving cars and their sensor data, even if they rely on the operator to explain what the car “remembers”, fundamentally shift the landscape.


That sounds pretty much like how things are done as it is.

For instance, take a look at:

https://www.ntsb.gov/investigations/AccidentReports/Pages/HW...

"The Tesla was equipped with multiple electronic systems capable of recording and transmitting vehicle performance data. NTSB investigators will continue to collect and analyze these data, and use it along with other information collected during the investigation in evaluating the crash events."


Don't we already have a huge infrastructure of police, the courts, and insurance companies in place to decide these very things every day?

I mean, how is this different from all of the other accidents that occur every day? Yes, a self-driving car is involved, but do people really think autonomous cars aren't going to be involved in fatal accidents?

Of course they are...but I've always thought that autonomous vehicles only have to be like 10% safer for them to make tons of sense to replace human drivers.


For the same reason we don't leave the investigation of plane-crashes to the attorney general and the courts. We care about more than 'who should we punish for this' in this case.

We want to know what happened, how it happened, how we could have prevented it, how likely it is to happen again, what assumptions or overlooked details lie at the heart of this.

The required level of expertise, effort and precision here are higher than in a regular traffic accident. Moreover, the required skill-set, knowledge base, and willingness to work in a new area here make this an exceptional case.

Finally, the outcome of this will be much more than liability in a single case. This could set the precedent for an entire field of industry. This could be the moment we find out self-driving cars are nearly a pipe-dream, or it could be the moment we kill self-driving cars at the cost of millions of preventable traffic accidents. This investigation just might determine a lot, again, that makes it exceptional.


Who would be liable? The owner of the AV? The manufacturer?


This sounds like something the NHTSA should do.


> We need law enforcement to be able to keep pace with advances in technology

Agree. Kumail Nanjiani (comedian) has a great rant on Twitter about exactly this, the ethical implications of tech:

> As a cast member on a show about tech, our job entails visiting tech companies/conferences etc. We meet ppl eager to show off new tech. Often we'll see tech that is scary. I don't mean weapons etc. I mean altering video, tech that violates privacy, stuff w obv ethical issues. And we'll bring up our concerns to them. We are realizing that ZERO consideration seems to be given to the ethical implications of tech. They don't even have a pat rehearsed answer. They are shocked at being asked. Which means nobody is asking those questions. "We're not making it for that reason but the way ppl choose to use it isn't our fault. Safeguard will develop." But tech is moving so fast. That there is no way humanity or laws can keep up. We don't even know how to deal with open death threats online. Only "Can we do this?" Never "should we do this? We've seen that same blasé attitude in how Twitter or Facebook deal w abuse/fake news. Tech has the capacity to destroy us. We see the negative effect of social media. & no ethical considerations are going into dev of tech.You can't put this stuff back in the box. Once it's out there, it's out there. And there are no guardians. It's terrifying. The end. https://twitter.com/kumailn/status/925828976882282496?lang=e...

It is scary. Big tech orgs have no incentive or motivation to even consider ethical implications; what's worse is that the American consumer has shown repeatedly that it's OK to do really shady stuff as long as it means a lower-priced product/service for the consumer. We're in a kind of dark age of tech regulation, and he's right, it is terrifying.


Maybe something like the IIHS ( https://en.wikipedia.org/wiki/Insurance_Institute_for_Highwa... ), which is funded by insurance companies?

Though that would of course depend on how insurance even looks for SDCs. Maybe big companies like Uber will self-insure.


Perhaps a company like OpenAI, or some other company set up by all the players involved, could run the analysis in cases like this.



State governments, especially those authorizing self driving cars, could authorize this to some extent.


How about we wait for the problem to present itself and actually cause harm before we throw law enforcement, government regulation, a regulatory body, etc. at it?


A pedestrian struck and killed doesn’t count as “harm”?


There's a difference between Uber playing fast and loose and industry wide bad practices.

The latter needs to be addressed by government. The former can be addressed by concerned parties using their political connections to get existing law aggressively enforced.


Important and missing:

"[Uber] said it had suspended testing of its self-driving cars in Tempe, Pittsburgh, San Francisco and Toronto"[1]

1: https://www.nytimes.com/2018/03/19/technology/uber-driverles...


that's SOP for every av program. have incident? ground the fleet. doesn't matter why, or who's at fault.


Again... 'Uber has temporarily suspended the testing of its self-driving cars following the crash of one of them in Tempe, Ariz. The ride-hailing company had also been conducting experiments in Pittsburgh and San Francisco.'

It was resumed shortly after.

27 Mar 2017 (1 year ago)

https://spectrum.ieee.org/cars-that-think/transportation/sel...


FTA:

"The Uber vehicle was reportedly driving early Monday when a woman walking outside of the crosswalk was struck.

...

Tempe Police says the vehicle was in autonomous mode at the time of the crash and a vehicle operator was also behind the wheel."

That's a very bad look; the whole point of self-driving cars is that they can react to unexpected circumstances much more quickly than a human operator, such as when someone walks out into the road. Sounds like Uber's platform may not be up to that standard yet, which makes me wonder why they're on public roads.

On the other hand, it sounds like it happened very recently; I guess we'll have to wait and see what happened.


> That's a very bad look; the whole point of self-driving cars is that they can react to unexpected circumstances much more quickly than a human operator, such as when someone walks out into the road.

Some of these accidents are unpreventable by the (autonomous) driver. If a pedestrian suddenly rushes out into the street or a cyclist swerves into your path, the deciding factor is often simply the coefficient of friction between your tyres and the road.

The autonomous vehicle and the human attendant might have made a glaring error, or they might have done everything correctly and still failed to prevent a fatality. It's far too early to say. It's undoubtedly a dent to the public image of autonomous vehicles, but hopefully the car's telemetry data will reveal whether this was a case of error, negligence or unavoidable tragedy.


>If a pedestrian suddenly rushes out into the street or a cyclist swerves into your path, the deciding factor is often simply the coefficient of friction between your tyres and the road.

This is only true for the uninitiated; never let it get to that point. I drove for Uber/Lyft/Via in New York City, so I have experienced and studied these situations. These sorts of accidents are preventable. The following are the basics:

1.) Drive much slower in areas where a pedestrian or cyclist can suddenly cross your path.

2.) Know the danger zone. Before people "jump into traffic" or a cyclist swerves in front of you, they have to get into position; this position is the danger zone.

3.) Extra diligence in surveying the area/danger zone to predict a potential accident.

4.) Make up for the reduced speed by using highways and parkways as much as possible.

It helps that Manhattan street traffic tends to be very slow to begin with. Ideally I would like to use my knowledge to offer a service to help train autonomous vehicles to deal with these situations. It has to be simulated numerous times on a closed circuit for the machine to learn what I've learned intuitively driving professionally in NYC.


> 1.) Drive much slower in areas where a pedestrian and cyclist can suddenly cross your past.

So essentially everywhere? I have seen pedestrians walk unexpectedly into traffic on high-speed 4-lane divided roads with no crosswalks.

> 2.) Know the danger zone. Before people "jump into traffic" or a cyclist swerve in front of you, they have to get into position, this position is the danger zone.

What "position" are you referring to? There are places where you can account for or predict pedestrians. There are also places where you cannot, such as when someone walks into traffic from behind a tall parked vehicle, where you have no chance to see them in advance.


Simply counting traffic fatalities suggests that crazy pedestrians causing unavoidable accidents cannot be common, even if every single pedestrian accident were both unavoidable and the pedestrian's "fault" (although I'd argue that ethically it must be primarily the vehicle's fault, but that's another story).

And that's with humans behind the wheel: lazy, distracted, slow-to-react fleshbags that we are.

I'm not sure that a truly unavoidable accident would occur even once a year in a fictional world in which all drivers were perfect and had millisecond reaction speeds.

What is obvious however, is that these situations are so rare as to be irrelevant. In practice accidents are avoidable by the driver of the vehicle - or at least avoidable to such an extent that it's not worth considering the other cases.

Also: although I personally don't object to some rational victim blaming, I think it's a little distasteful that we're already speculating about how this must be the victim's fault, when there's simply not enough evidence to make that kind of determination yet. Let's not forget that part of the privilege of being allowed to participate in traffic implies a responsibility not to kill people even when they behave unexpectedly.

For some statistical perspective: if human drivers had as many fatal accidents per mile as Uber has, then the average male driver would kill 1 person in his lifetime (men drive more). Clearly that's absurd; people may cause too many accidents, but not nearly that many -- and that's being rather charitable to Uber's self-driving vehicles, since they have safety drivers who take over in complicated traffic situations, so the system by itself may well have caused more accidents than the record shows. So going purely by the unusual-ness of such an accident with so few miles, I'd say the initial assumption must be that this is likely a bug in Uber's car, even if I'm sure there were contributing factors.

Edit: I guess it's not surprising Wikipedia has stats on the influence of alcohol on fatalities, but it tops out at 4 times the legal limit - at which point human drivers are still safer than this (sample size of one...) Uber record so far. :-/


> Simply counting traffic fatalities suggests that crazy pedestrians causing unavoidable accidents cannot be common

Of course not. I'm not saying they're common, merely that they exist. I do not agree with the idea that accidents would go away if drivers were just trying harder, though. There are legitimate unavoidable accidents, and there are also limits to practical human driving.

> Also: although I personally don't object to some rational victim blaming I think it's a little distasteful that we're already speculating about how this must be the victims's fault

To be really clear, I am not blaming the victim here. I have no idea what happened. I'm actually very inclined to blame Uber, though I recognize that's just my personal bias against them.


Why can't you slow down when driving past tall parked vehicles that you can't see through? If you are going slower you will have a chance to see them in advance; select a speed where your stopping distance is less than the length of the obstruction and be prepared to brake. You will have no chance of striking any but the most willfully suicidal of pedestrians.

Near my house there is an arterial I often cross where a large bush on the corner of the intersection obscures the view to the left from the stop sign roughly 10 feet back. I don't just stop at the sign then YOLO through the intersection, I stop, then creep forward, first looking for pedestrians or bicyclist that might step out from the bushes, then at the edge of bushes I stop again and look again both left and right for cars and bikes or pedestrians in the far lane before proceeding across. I often have to stop another time between the sign and the bushes for the pedestrian or bicyclist that just emerged, if I was focusing on getting across the arterial and beating cross traffic I would have killed every one of them.


At what speed do tall vehicles suddenly become transparent?

The idea that cars should drive past tall vehicles at 5 mph is slightly ridiculous. I have never ridden in a car with someone who constantly adjusted their driving speed based on the cars parked on the road. Choosing a reasonable speed? Of course. Extra care at obvious obstructions? Absolutely. Slowing traffic to 5mph because a van happens to be parked? No.


It is not ridiculous if you are driving in a narrow lane close to the obstacles. When I am driving on a residential street (one lane, parked cars on both sides) and approaching a tall opaque van, I start by noticing whether anyone is around it while I still have good sight lines. Then, as I get near it, I absolutely slow down from an average speed of 15-20 mph to well under 10 mph and, depending on the situation, sometimes as low as 5 mph or slower. When I am on a faster road with two or more lanes there is often room to move laterally, so my speed reduction is less extreme, but I absolutely do still slow down as I pass these vans/trucks, cover the brakes, and check my blind spot to see if there is room for evasive maneuvers should they be required. If I see people in the area as I approach and have reason to believe they might try to cross, enter or exit a car, or otherwise be near the lane of traffic, I will often change lanes to the left if possible.


While what you are saying can reduce the number of occurrences, it does not change what your parent post said.


You're making an observation about what it's like in an urban area. Sadly, in a suburban area like Tempe the presence of any pedestrian anywhere is unexpected.


The ASU campus area around Tempe has diverse traffic conditions. There are spots near the campus where you would expect to stop at every crosswalk, while in some roadways, a pedestrian would be completely unexpected.

Around the bars, people stumble into traffic all the time. Any driver, automated or human, would need to anticipate that.


> If a pedestrian suddenly rushes out into the street or a cyclist swerves into your path, the deciding factor is often simply the coefficient of friction between your tyres and the road.

This holds iff the only control input you apply is braking. Changing the steering angle is generally far more effective for the "pedestrian darts out from a hidden place onto the road" situation. It's far better to sharply swerve away, so that there's no way the pedestrian can get into your path before your car arrives there, than it is to stand on the brakes and hope for the best.

Indeed, the faster you're moving, the more you should consider swerving away over braking -- take advantage of that lethal speed to clear the pedestrian's potential paths before he can get into yours.

Yes, this intentionally violates the letter of the traffic laws (and might involve colliding with a parked or moving automobile on the other side of the road) and also involves potentially unusual manoeuvring on a very short deadline; but it's far better to avoid a vehicle-pedestrian collision even at the cost of possibly busting into the opposing lane, driving off the road, or hitting a parked car. Decently experienced drivers can do this, I can do this (and have successfully avoided a collision with a pedestrian who ran out between parked cars on a dark and rainy night), and there's no fundamental reason that computer-controlled cars can't do this.
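To make the swerve-vs-brake trade-off concrete, here is a rough kinematic sketch assuming constant deceleration for braking and constant lateral acceleration for the swerve; the friction coefficient and the one-lane lateral offset are illustrative assumptions.

  import math

  MU, G = 0.7, 9.81      # assumed tyre-road friction coefficient and gravity

  def braking_distance(v):
      """Forward distance to come to a full stop from speed v (m/s)."""
      return v**2 / (2 * MU * G)

  def swerve_distance(v, lateral_offset=2.0):
      """Forward distance covered while shifting the car sideways by
      lateral_offset metres at the same maximum acceleration the tyres allow."""
      t = math.sqrt(2 * lateral_offset / (MU * G))
      return v * t

  for mph in (20, 30, 40, 50):
      v = mph * 0.44704
      print(f"{mph} mph: stop in {braking_distance(v):4.1f} m, "
            f"clear one lane sideways in {swerve_distance(v):4.1f} m")

Under these assumptions braking wins below roughly 23 mph; above that, the car clears a full lane of lateral offset in less forward distance than it needs to stop, which is the intuition behind favouring the swerve at speed.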


I think you touched on the fundamental reason self driving cars cannot currently do this. It would mean programming self driving cars to, in some circumstances, break the law.

I agree with the thrust of your comment though, and think that a change in the law may be required to align incentives for machines as well as people to drive humanely.


They already do that for speed limits - iirc Waymo cars will 'go with the flow' to a certain extent if everyone around them is speeding. Swerving out of your lane (assuming it is safe to do so) to avoid a collision seems pretty straightforward.


Which eventually gets the vehicle into conflict. Exhibit A: among other things it was going over the speed limit, and oops, now someone's dead.


Hmm, interesting. Google Street View seems to have 45 mph speed limit at that location...https://www.google.com/maps/@33.4350531,-111.941492,3a,75y,3...


>Yes, this intentionally violates the letter of the traffic laws...

...so you probably won't find any support for it in this crowd.

> there's no fundamental reason that computer-controlled cars can't do this.

Sure, there's no fundamental reason you can't swerve but you need to identify the thing in the road you're trying not to hit before you can decide whether to swerve or brake. Self driving cars can't yet reliably ID things well enough to take the only evasive action they know (stopping).


Obviously if there's room and time to swerve and avoid the pedestrian, you should, but swerving into an oncoming car? That sounds extreme. There's no way I'd risk killing myself and anyone in the oncoming vehicle to save one person who made a poor decision. In any situation where there's other traffic and off-street foot traffic, swerving might just cause more harm than good. Everything is situational, and I'm sure there are bound to be cases where there is no sensible option for avoiding a fatality because of exceedingly bad timing on the pedestrian's part.


This is a good point. Robocars will have more evidence for their defense in such a case than human drivers would have. "Oh there was a kid who jumped out? And that's why you plowed into this parked car? Likely story!"


That's technically true but the number of truly unavoidable cases is orders of magnitude lower. With human drivers, "jumped out in front of me" really means inattention 99.9% of the time, and police departments have historically been unlikely to question such claims. (For example, here in DC there have been a number of cases where that was used to declare a cyclist at fault, only to have private video footage show the opposite - which mattered a lot for medical insurance claims.)

With self-driving cars this really seems to call for mandatory investigations by a third-party with access to the raw telemetry data. There’s just too much incentive for a company to say they weren’t at fault otherwise.


You seem to be giving Uber a big benefit of the doubt. These autonomous cars generally go slow. Tempe has flat roads with great lines of sight and clear weather. Coefficient of friction? I highly doubt it. The sensors should be looking at more than just straight ahead.


It's not unreasonable for there to be an expectation of basically zero accidents of this nature during testing in cities. The public puts a huge amount of trust in private companies when they do this. And, pragmatically, Google, Uber, etc. all know that it would be horrible publicity for something like this to happen. One would think they'd be overly cautious and conservative to avoid even the possibility of this.

Lastly, the whole point of the human operator is to be the final safety check.

You're right that we have no idea of the cause until the data is analyzed (and the human operator interviewed). Yet, my first thought was, "Of course it'd be Uber."


If a pedestrian suddenly rushes out into the street or a cyclist swerves into your path, the deciding factor is often simply the coefficient of friction between your tyres and the road.

I mentioned in another comment that something I use to try to improve my own driving is watching videos from /r/roadcam on reddit, and trying to guess where the unexpected vehicle or pedestrian is going to come from.

Here's an example of a pedestrian suddenly appearing from between stopped cars (and coming from a traffic lane, not from a sidewalk), and a human driver spotting it and safely stopping:

https://www.youtube.com/watch?v=wYvKPMaz9rI

Why can't a self-driving car do this?


Agreed that it's too early to really say one way or another. In maybe 100k miles of urban driving I've had one cyclist run into my car and a girl on her phone walk directly into the front corner; I was at a complete stop watching them both times.

Until there's a detailed report it's really hard to say if it was preventable or not - but I think regardless the optics are bad and this is going to chill a lot of people's feelings on self driving whether or not that is an emotion backed by data.


The hope is that AV can see 360 and observe things outside of blind spots further away. So the kid running from a yard into the road after a ball should be safer, but a person walking out from behind a parked truck wouldn't.


If you have good judgement, it's quite easy to prevent a fatality; e.g., I go slower in places where I know pedestrians might jump out.


Lower the chances of, not prevent.


Sure, but in the above user's hypothetical, that would mean that in such an area a concerned human driver, with a greater ability to predict general human behavior, would have a statistical safety advantage over the autonomous vehicle, which might not understand, for instance, that since it's a Friday night and the big game just ended and I'm in the city center, I should be more careful than usual because there will be more intoxicated people.

Which is a lot of high level reasoning and inference with information from a variety of sources which aren't on the face of it strictly related to the driving task.


That's the type of thing that an excellent driver thinks about, but about 10x more thinking than I believe the average driver does.


Exactly; you can never reduce accident rates to zero, even if you slow to a crawl or stop (now you're endangering traffic behind you). Debris could fall on your car and make the steering non-responsive. A tire could blow out at any speed and cause a bicyclist to veer into traffic/off a cliff/whatever.

When it comes to human drivers we deal with probabilities, but for some reason people want absolutes with autonomous ones.


This seems like the opinion of someone who drives too fast?


Sounds like the opinion of someone who knows minimum braking distances still exist even when going slowly.


If pedestrians or especially children are present, one should be driving very slowly. 15mph sounds about right. At that speed one is unlikely to kill, since even if a pedestrian "jumps out" (an event that is vanishingly less common than drivers not paying attention), one has enough time to stop.

Most drivers drive too fast. (Me too!) Most drivers have not killed anyone. All drivers who have killed someone, were driving too fast at the time. Many drivers will disagree with me, but they are simply wrong about appropriate speeds, and we will all be safer when robocars are driving for them.


Even when the minimum braking distance is not small enough to avoid hitting a pedestrian, a lower speed will impart a lower kinetic energy, vastly reducing the odds of fatally injuring the victim.
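A quick sketch of why speed dominates the outcome; the friction coefficient, reaction time, and sight distance below are assumed illustrative values.

  import math

  MU, G = 0.7, 9.81          # assumed tyre-road friction coefficient and gravity

  def impact_speed_mph(initial_mph, sight_distance_m, reaction_s=1.0):
      """Speed at which you hit an obstacle appearing sight_distance_m ahead,
      assuming full braking after a fixed reaction delay."""
      v0 = initial_mph * 0.44704                      # mph -> m/s
      braking_room = sight_distance_m - v0 * reaction_s
      if braking_room <= 0:
          return float(initial_mph)                   # no braking before impact
      v_sq = v0**2 - 2 * MU * G * braking_room
      return math.sqrt(v_sq) / 0.44704 if v_sq > 0 else 0.0

  for mph in (15, 25, 40):
      hit = impact_speed_mph(mph, sight_distance_m=15)
      ke_ratio = (hit / mph) ** 2
      print(f"from {mph} mph, pedestrian appearing 15 m away -> impact at ~{hit:.0f} mph "
            f"({ke_ratio:.0%} of the initial kinetic energy)")

At 15 mph the car stops with room to spare; at 40 mph it never even begins braking before reaching the same spot.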


This statement is either very poorly thought out, or needlessly accusatory and inflammatory.

I would say that if you claim that a massively complex system like public traffic, with thousands of participants, many of them badly or completely untrained, can be organized in such a fashion that any and all accidents can be prevented, the burden falls on you to show how.


This subthread is not about the massively complex system. Rather we are discussing the good judgment of individual drivers. Even if my statement were not literally true (the best kind, and it literally is), it would still be good practice for all drivers to commit themselves to driving slowly enough to prevent collisions with pedestrians.


This is the reason robocar firms will fight to keep all their data private. If it were public, researchers could show and personal injury attorneys could argue persuasively that there are some speeds that would never cause fatal collisions. Since those speeds will be slower than most passengers wish to travel, this mode of travel will be more vulnerable to lawsuits.


The car's telemetry data will undoubtedly point to an unavoidable tragedy if it's just Uber analyzing it.


Makes me wonder the likelihood of the public ever seeing that telemetry if it appears incriminating to Uber.


The data will have been lost in an unfortunate accident, which Uber will promise never to repeat.


...despite them taking this incident very, very seriously.


That won't be the case; the data is evidence in a criminal investigation.


Making evidence disappear is indeed inconceivable. Especially for a firm of Uber's stature.


This stuff needs to be regulated sooner rather than later.


Yet this tragic accident will now set a precedent and finally start to scratch the delicate unspoken question: Who is responsible now?


Likely the more apt word is liable.

In law you split that into criminal and civil. Very unlikely there will be any criminal charge, hence no criminal liability but it’s possible.

Civil liability is more interesting, but if the deceased’s estate brings suit it opens up a can of worms in the discovery process. Every internal communication, prior incident, etc...

At the end of the day it's not much different than any other vehicular fatality case, but there will be a defective-product component (again, nothing unheard of in vehicular fatality cases). The car will be insured for accidents, so we will probably learn more about the insurance coverage for self-driving cars.


Corporate manslaughter is still criminal.


Obviously Uber in this case, but as mentioned in response to a separate article, AIs must be treated like pets or children and have a liable guardian.


Not obviously. Let's wait to judge until we have the facts. The pedestrian was outside of the crosswalk. There's not a whole lot of information about the events that lead up to the accident. It is possible that a pedestrian is at fault if they stepped in the way of moving traffic outside of a crosswalk. It's also possible Uber's cars are not up to the task of driving on public roadways.


The penalty for walking outside a crosswalk is not death.


Arizona requires drivers to "exercise due care to avoid colliding with a pedestrian". If a fatality results, but due care was taken by the driver to avoid it, then the tragic accident is just that, an accident.

Like my comment says, let's wait until we have facts before passing judgement.


Sure, we don't know many of the details.

But from this distance, things don't look good for the technology or the future of autonomous trials on public roads.

The evidence that due care was not taken will pretty much be the existence of the fatality.

There surely will be video, from the car itself and also perhaps third party security or traffic cameras. The level of carelessness we are going to have to see in order for a jury to blame the victim will be pretty high.


I agree with the final paragraph in your comment. All I'm saying is let's reserve judgement until we know what happened.

As for a fatality proving that due care was not taken, I agree. It doesn't tell you who failed to take due care though.


Who would the guardian (and therefore responsible) be - the driver or the car manufacturer? Sounds like a circular argument to me!


In this case, the driver is an Uber employee, and the automation is Uber-created and deployed, so the driver is the car manufacturer.


I’m kind of in the Elon Musk camp here where you gotta break some eggs to make an omelette? Human-driven cars kill a lot of pedestrians today, but we can actually do something to improve the human-recognition algorithms in a self-driving car.

As long as self-driving cars represent an improvement over human drivers, I'm ok with them having a non-zero accident rate while we work out the kinks.


The problem is the blame game. When a human behind the wheel hits a pedestrian and is found at fault, that "horrible inattentive/drunk/whatever driver" goes to jail for the death of another human being. It never makes anything "right", and I won't even start on how a jail sentence of any length can ruin your life in the US, but the public as a whole gets the feeling that "justice has been served".

How do we handle this for autonomous vehicles? Do we just fine/sue the company that made the vehicle/developed the software? Do we send imperfect human developers to jail because they made a mistake, even if in the grand scheme of things they have saved lives compared to humans being behind every action made by a vehicle?

A big part of the public image for autonomous cars is increased safety, any deaths at their hands starts raising where and how to place the blame - a subject I think very few are prepared for right now, which is likely part of why Tesla explicitly states autopilot needs a human driver present right now, and why Google has been extremely cautious with operator-supervised tests up until recently.


People get found not at fault for hitting pedestrians all the time.


...where breaking eggs means killing people?

There's no way for autonomous driving to become a reality without people dying?

I think we could achieve autonomous driving without needless deaths along the way. It might take longer, but I'd say it's worth it to, I can't believe I have to make this argument, avoid killing people.


In some sense, yes.

But is it any surprise that Uber had the first self driving car to kill a human being? You can cook in an orderly, careful way, or you can turn the kitchen into a disaster zone and expect other people to clean up for you.

I know which I prefer when it comes to human lives.


While I agree that the AV safety record is better than the human one, how is "you gotta kill random bystanders to make an omelette" okay just for this one industry? (As opposed to e.g. medical or military testing)


Military testing is typically done by putting a few pieces of the tested equipment in the field. Russia is doing exactly that in Syria with their new Su-57. Pretty sure that's where a few of their other planes got a first taste of live combat; that's part of testing. I'm sure it'll cause some unintended fatalities.

And, medicine takes tens of thousands of lives unintentionally, if I recall correctly.


So, by this logic, AZ is now a war zone?

Medicinal testing kills tens of thousands?


Not at all. You had mentioned military testing, which is safe in early stages, and I addressed that first: the testing process does follow through into live environments. Where things can go wrong.

But no, medical testing doesn't kill tens of thousands; practiced medicine does. I may have read your post wrong and figured you meant medicine in practice separately from military testing. There's research to suggest mistakes are the third leading cause of death in the US [1]. My point wasn't so much that killing random bystanders is okay, but that there's a level of unintended death in both, and not just in testing. Society decides what's acceptable, and if the number of deaths self-driving vehicles cause per mile driven is lower than the number of deaths human drivers cause per mile driven, well... I'd say that's a fair way to look at it.

1. http://www.bmj.com/content/353/bmj.i2139


Okay, that does sound far more reasonable than comparing dead people to a part of a recipe.


[flagged]


Omelet metaphor aside, that doesn't really seem fair. We're talking about accepting more than zero deaths caused by AI because the death rate should still be much lower than with humans driving. IOW, the perfect is the enemy of the good.


>> Sounds like Uber's platform may not be up to that standard yet, which makes me wonder why they're on public roads.

They're on public roads because they've decided that deploying their tech is more important than safety.

I don't see any other explanation. We know that it's pretty much impossible to prove the safety of autonomous vehicles by driving them, so Uber (and almost everyone else) have decided that, well, they don't care. They'll deploy them anyway.

How do we know that? The report by RAND corporation:

https://www.rand.org/pubs/research_reports/RR1478.html

  Key Findings

  * Autonomous vehicles would have to be driven hundreds 
  of millions of miles and sometimes hundreds of billions 
  of miles to demonstrate their reliability in terms of 
  fatalities and injuries.

  * Under even aggressive testing assumptions, existing 
  fleets would take tens and sometimes hundreds of years 
  to drive these miles — an impossible proposition if the 
  aim is to demonstrate their performance prior to 
  releasing them on the roads for consumer use.

  * Therefore, at least for fatalities and injuries, 
  test-driving alone cannot provide sufficient evidence 
  for demonstrating autonomous vehicle safety.
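For a rough sense of where numbers like that come from, here is a minimal sketch using the textbook "rule of three": observing zero events over N miles gives an approximate 95% upper bound of 3/N on the underlying rate. The baseline rate is the US figure cited elsewhere in this thread; the fleet assumptions are made up.

  # Miles of fatality-free driving needed to claim, with ~95% confidence,
  # that a fleet's fatality rate is no worse than the human baseline.
  human_rate = 1.16 / 100e6        # deaths per mile (US 2016)

  miles_needed = 3 / human_rate    # rule of three: 3/N is the 95% upper bound after zero events
  print(f"~{miles_needed / 1e6:.0f} million fatality-free miles needed")

  fleet_size = 100                 # assumed test fleet
  miles_per_car_per_year = 50_000  # assumed aggressive, near-continuous testing
  years = miles_needed / (fleet_size * miles_per_car_per_year)
  print(f"with {fleet_size} cars at {miles_per_car_per_year:,} miles/year each: ~{years:.0f} years")

That lands in the same ballpark as RAND's "hundreds of millions of miles" and "tens of years", and it only gets worse if you want to demonstrate a rate several times better than the baseline.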


That report ends by saying essentially, "it may not be possible to prove the safety of self-driving cars". [1] So the value here is questionable and the same logic could apply to anything with a low frequency of occurrence. The value of air bags by this measure was not proven until they were already mandated.

[1] "even with these methods, it may not be possible to establish the safety of autonomous vehicles prior to making them available for public use"


The difference, of course, is that an airbag can't take control of a car and run someone over.

More to the point, the report notes that new methods to determine the safety of self-driving cars are required.

Which the industry is not exactly falling head over heels trying to develop.


The report also says that these hypothetical new methods may not be able to prove safety. It's not a straightforward problem. How do you prove that you've reduced (or at least not increased) a problem that occurs so infrequently?

Realistically no one will trust “new methods” and establishing their relevance is really difficult. I would imagine that most of these companies are running lots of simulations, because why wouldn’t you? But how many people will see that and trust it more than data gathered on the road?


Indeed, there are very few reasons to trust simulations to tell us anything about safety in the real world.


The main problem with self-driving cars is that they can't "read" humans' body language. A human driver can see pedestrians and cyclists (and other cars) and have a rough idea of what they're likely to do in the next few seconds, i.e. the pedestrian leaning out on the crosswalk curb is likely to step into the road soon. Even a reaction time of milliseconds can't make up for the (currently exclusive) human ability to read other humans and prepare accordingly.


They also fail to "write" human body language. Nobody else can predict what the autonomous vehicle will do.

It gets worse when a person is sitting in what appears to be the driver's seat. If the car is stopped and that person is looking toward another passenger or down at a phone, nobody will expect the vehicle to begin moving. Making eye contact with the person in that seat is meaningless, but people will infer meaning.


Great point. So much of our daily driving relies on the exchange of subtle social cues with other drivers.


That is an interesting point. When cycling, I often am forced to rely on reading the driver's intentions, something I don't really want to have to do; signals obtained from a person sitting in the driver's seat but not operating the vehicle could be totally irrelevant to predicting the behavior of the vehicle itself (and cars are not equipped to signal those things very well).


I have never driven a car. I walk, bike, or skateboard everywhere. If there isn't a light, I require human feedback before walking in front of a car. Normally I wave and they wave back. They know I am there, so I can walk.


Humans also can't read humans' body language. A pedestrian waiting at a corner isn't waiting for the weather to change. They are waiting for passing cars to stop, as required by law. But passing cars speed by instead of stopping, unless the pedestrian does a lunge into the street -- preferably a bluff lunge, since most drivers still won't stop, preferring to race the pedestrian to the middle of the lane.


With sufficient data, I'd expect self-driving cars to be better at predicting what such leans mean. Moreover, for every one human driver who notices such a lean, there may be another human driver that doesn't even notice a pedestrian who has already started walking.


For someone so confident in the application of data, you sure just made up some data to support your point.


True, but commuting daily for the past 30 years sure seems to validate it.


This comment seems unnecessarily hostile. But, still, I'll change my previous comment from "there is" to "there may be" so that it's clear that I'm not claiming any data. Just guessing (reasonably, based on experience).


I walk to work every day, and I can assure you that most human drivers have zero awareness of what pedestrians are about to do.

Even on crosswalks. If I just strolled out onto a crosswalk without looking and without waiting for drivers who were paying no attention, I'd be long dead.


OT, but I would love to see how self-driving AIs handle something like Vietnam moped traffic and pedestrian crossings. The standard behavior for pedestrians is to walk slowly and keep walking -- stopping, even if it seems necessary to avoid a speeding driver, can be very dangerous, as generally, all of the street traffic is expecting you to go on your continuous path. It's less about reading body language than expectation of the status quo:

https://www.youtube.com/watch?v=nKPbl3tRf_U


Are you sure that supervised learning does not create the same classification capabilities in self driving car AIs?


I'm speaking more of the current state of the self-driving cars I've been in - future improvements will likely narrow the gap or eventually surpass human abilities in most scenarios. How or when is the main question.


Only through this HN thread did I learn that AI is actually barely used in current self-driving car software. This was after I wrote that comment.


Could a human have reacted fast enough to stop for someone jumping out in front of them? If the person jumped out so fast that nobody could have possibly reacted in time, then it's not a stain on the technology- even with an instantaneous application of brakes, a car still takes a while to come to a stop. If the human jumped out ten seconds earlier and was waving her hands for help, then it's an issue.

"the whole point of self-driving cars is that they can react to unexpected circumstances much more quickly than a human"

This statement isn't really true. Do you think Uber is investing in this because it makes their passengers safer? No. They are pretty much immune to any car crashes with a human driver; the risk and insurance hit is assumed by the driver. They are doing this to save money: they don't have to pay a driver for each trip. The safety aspect is just a perk, but Uber will be pushing this full force as long as the self-driving cars are good enough.

I'm more concerned that Uber will "lose" the video evidence showing what kind of situation it was, and we'll never be able to know if a human would have had ample time to react.


I've been saying this from the beginning: it's not enough for self-driving cars to be "better than the average driver". They need to be at least 10x better than the best drivers.

I find it crazy that so many people think it is. First off, by definition like 49% of the drivers are technically better than the "average driver".

Second, just like with "human-like translation" from machines, errors made by machines tend to be very different than errors made by humans. Perhaps self-driving cars can never cause a "drunken accident", but they could cause accidents that would almost never happen with most drivers.

Third, and perhaps the most important to hear by fans of this "better than average driver" idea, is that self-driving cars are not going to take off if they "only" kill almost as many people as humans do. If you want self-driving cars to be successful then you should be demanding from the carmakers to make these systems flawless.


If 1% fewer people died with self driving cars that would be 400 fewer deaths a year. That's absolutely a good change.

I agree that the _goal_ should be 10x or even 100x better than the best human drivers, but better than average would result in net good and it's hard for me to see an argument in which that's not true because it's less net suffering/death.

>If you want self-driving cars to be successful then you should be demanding from the carmakers to make these systems flawless.

We're still talking about tons of metal moving at very high speeds around other vehicles. That's just not possible. Should my self driving car handle a catastrophic tire failure at 90mph better than me? Absolutely. Is that still a situation that's likely to result in a crash regardless of the best inputs on a compromised system? Yes. There will _always_ be situations in which fatalities can occur with cars or any other kind of vehicle no matter how well they're engineered.


Your first point depends on what question we're asking. Are we asking which is morally justifiable? Or are we asking what the public will accept?

Having seat belts is obviously statistically safer, but when they were first introduced, many people loudly protested that they'd rather be thrown clear in an accident.


Even if, worldwide, self driving cars only caused there to be 1 less death per year, why on Earth would you not want that? What's the advantage to you in having more people die while waiting for self driving cars to be perfect?


> First off, by definition like 49% of the drivers are technically better than the "average driver".

No, that’s not the definition of mean, that’s (something like) the definition of median.

In a skewed distribution (which driving skill may very well be), the mean can be very far from the middle. If a relatively small number of people are extraordinarily bad at driving, most people are above average. If a relatively small number of people are extraordinarily good at driving, most people are below average.

Why this nitpick matters: if the leading point of your thesis depends on quantifying how many people are above average, you should really know whether the average is at the 10th, 50th, or 90th percentile (or somewhere else entirely).
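A toy illustration, with entirely made-up "expected crashes per year" numbers, of how a skewed distribution lets most drivers be better than average:

  import statistics

  # 95 ordinary drivers plus 5 very bad ones; the outliers drag the mean up.
  risk = [0.05] * 95 + [2.00] * 5

  mean = statistics.mean(risk)
  median = statistics.median(risk)
  better_than_mean = sum(r < mean for r in risk)

  print(f"mean risk:   {mean:.3f}")
  print(f"median risk: {median:.3f}")
  print(f"{better_than_mean}/{len(risk)} drivers are better (lower risk) than the mean")

Flip the outliers to the good end and the mean drops below the median instead, which is why you need to know which way driving skill is skewed before quoting "better than average".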


Median*

Average means something else, technically.


Median is a type of average, as is Mean, as is Mode.


Would you really use mode to calculate "average"? Ever? I'd have half a mind to slap someone doing that.

And I strongly doubt "average [X]er" would be interpreted as a median by most people. (Except when median and mean overlap)


Context is important.

The quote is "by definition like 49% of the drivers are technically better than the "average driver"." It's obvious to anyone who's not pedantic that they're referring to the median.

As for an example of a mode average: "The average participant's favorite polygon was a triangle."


> The quote is "by definition like 49%...

That's not the original use. That's a response to the (unattributed) idea that self-driving cars would need to be "better than the average driver". In the original context of people giving thresholds for self-driving cars, I disagree that median is what is meant.

> As for an example of a mode average: "The average participant's favorite polygon was a triangle."

But that mode might be 5 out of 40, and I'd just call that a lie. In a lot of distributions the mode just gives you the biggest or smallest number. And it's affected a lot by bucket size; if I measure "how long did you sleep last night" with a resolution of seconds, the mode is probably going to be 0, which is hilariously misleading. Mode, much like a broken clock, might pick a suitable number sometimes, but all the factors you need to check to see if it's suitable basically render the idea of 'mode' redundant. Just use those factors to pick your number. Mean and median are pretty reliable in being useful. Mode isn't.


> Would you really use mode to calculate "average"? Ever?

Of course, to impute a categorical variable in a data set with empty values, for example.


Sure, and that’s what average may refer to before you drill in with a specific definition. If I say by definition, rectangles are technically polygons with 4 edges of equal length, you’re going to say “technically” I meant a square.


The bicyclist did not "suddenly" jump out in front. The Uber car did not sense the bicyclist.


Video says bicyclist, written article says pedestrian. Video says they don't know if a person was behind the wheel, written article says there was a person behind the wheel.

There's enough discrepancy here that I'm not sure what happened until we see the onboard data (unlikely to be released to the public) or a statement from a trusted public official like the Tempe police chief.


Police have clarified that the victim was walking her bike across the street: https://twitter.com/AngieKoehle/status/975824484409077760


I have seen no evidence that any autonomous vehicle currently deployed can react faster than an alert and aware human. The commentariat tends to imagine that they can, and it's certainly plausible that they may eventually be. But I've never seen anyone official claim it, and the cars certainly don't drive as though they can quickly understand and respond to situations.


"Alert and aware human" is already a high standard, given how most humans drive in routine mode, which is well understood to be much worse than "alert and aware".

From what I've seen I wouldn't trust autonomous cars to "understand" all situations. I would trust Waymo cars to understand enough to avoid hitting anything (at a level better than a human), at the risk of being rear-ended more often. Everything I've seen from Tesla and Uber has given me significantly less confidence than that.


> I have seen no evidence that any autonomous vehicle currently deployed can react faster than an alert and aware human.

The argument I've always heard is that autonomous systems will outperform humans mostly by being more vigilant (not getting distracted, sleepy, etc.) rather than by using detectors with superhuman reaction times. Obviously, whether or not this outweighs the frequency of situations where the autonomous system gets confused when a human would not is an empirical question that will change as the tech improves.


Lots of people imagine both: that the system will never be distracted and also that it will have superhuman reaction time.

And, like, once the systems are mature and hardware has evolved and so forth, I think that's right. It will. It might today under certain circumstances, like if it gets a really unambiguous sensor input (or it might not).

But I've never heard anyone actually associated with a driverless car program assert that their vehicles have superhuman reaction times today, and the vehicles drive extremely cautiously. I think it's likely that due to their difficulty in understanding their sensor readings, if you look at total time necessary to make a course correction from the point when an obstruction first could be noticed by an alert human driver, driverless cars are not winning and may be substantially losing in at least some cases.


Not a truly autonomous vehicle example but this is a case where most likely the car reacted before the driver was even aware of a problem: https://www.youtube.com/watch?v=APnN2mClkmk

I agree with the sentiment though. This has been a major selling point for this technology, but it has not been sufficiently demonstrated yet.


From the video, it looks like the car noticed the crash at the same time a human would, but the car decelerated immediately, whereas the human might have slower reaction time.

(It certainly didn't predict the crash "seconds before", as the car accelerated into the imminent collision until less than a second before impact.)


I live and commute in Waymo country, and see evidence of quick reactions, though I can't say for sure whether it's an alert human taking over. Mostly, the Waymo vehicles still drive conservatively.


Consider that autonomous braking systems know the distance required to stop and activate appropriately. Try getting a human to mimic that.


Humans correctly gauge stopping distance and "activate appropriately" like tens or hundreds of billions of times per day. Try again.

Something that is endemic in HN discussions of driverless cars is commenters who dramatically overestimate the dangers of humans driving. Like an order of magnitude or more. You see it all over this thread, in which people imagine that killing one pedestrian in like let's say 10 to 20 million miles driven (with those miles overwhelmingly done in unchallenging conditions) constitutes "vastly safer than human drivers" rather than "vastly less safe than human drivers."


> Humans correctly gauge stopping distance and "activate appropriately"

Toyota has enough UX data that they added "brake assist"; it turns out that a lot of accidents happen when the driver stomps on the brake but then releases it, never presses it hard enough to come to a complete stop, or was trained before anti-lock brakes.

https://www.youtube.com/watch?v=grcuorbrYxA

Another scary thing about braking is that most cars probably don't have their brake fluid changed often enough; most owners and even dealers treat it as a lifetime interval, when it may be closer to an annual one.


Where did you get 1 in 10million miles from?

Driverless cars simply don't have the mileage yet to prove they are safer than driverful cars. But there's also no indication they are less safe, based on casualty rate so far.


Waymo said that they'd hit 4 million miles back in Nov 2017, they seem like they've done the most miles, there are several other contenders, so I took a wild guess, trying to err on the side of overestimating # of miles.

There is an indication that they're less safe! They have (very conservatively) 5x the number of fatalities per mile driven! Now, look, the error bars on that estimate are of course massive. It is plausible that they are much safer and they just rolled the dice and got unlucky. But this is data, as long as you include the error bars.
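To put rough numbers on those error bars, here is a sketch using an exact Poisson interval for one observed fatality (it needs scipy); the total-mileage figure is a guess, as above.

  from scipy.stats import chi2

  # Exact (Garwood) 95% Poisson interval for k observed events:
  # [0.5 * chi2.ppf(0.025, 2k), 0.5 * chi2.ppf(0.975, 2k + 2)]
  k = 1                      # one fatality observed so far
  miles = 10e6               # guessed total autonomous miles across the industry

  lo = 0.5 * chi2.ppf(0.025, 2 * k)
  hi = 0.5 * chi2.ppf(0.975, 2 * (k + 1))
  rate_lo, rate_hi = lo / miles, hi / miles

  human_rate = 1.16 / 100e6  # deaths per mile, US 2016

  print(f"point estimate: {k / miles:.2e} deaths/mile ({(k / miles) / human_rate:.0f}x human)")
  print(f"95% interval:   {rate_lo:.2e} .. {rate_hi:.2e} deaths/mile")
  print(f"                ({rate_lo / human_rate:.1f}x .. {rate_hi / human_rate:.0f}x the human rate)")

Under that guess the interval runs from below the human rate to dozens of times worse, i.e. the single data point is consistent with both "safer than humans" and "much less safe".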


> That's a very bad look; the whole point of self-driving cars is that they can react to unexpected circumstances much more quickly than a human operator, such as when someone walks out into the road.

No, the whole point of self-driving vehicles is that firms operating them can pay the cheaper capital and maintenance costs of the self-driving system rather than the labor cost of a driver.


I imagine these vehicles (especially test vehicles) are also recording regular video, and therefore getting a clear picture of what happened should be straightforward.


This is Uber, who designed systems for remotely wiping entire offices when they thought that the data might be called for by the authorities.

Perhaps there will be unexpected 'issues' meaning that such data will have been lost in this case.


I can guarantee you (at least one) other major tech companies have the same system in place. Companies that operate in multiple jurisdictions don’t want all of their assets compromised because of a raid in one city/state/country.

E.g. Brazil was at odds with WhatsApp over their message encryption — Brazil wanted WhatsApp to stop encrypting messages so that it would be possible to subpoena chat logs. If Brazilian police raided a hypothetical WhatsApp Brazil office, would it not be prudent for Facebook to cut off data access from that office? Especially given that the nature of technology means that once police have an employee’s laptop and can compel them to enter their password, they have access to essentially all of the company’s data worldwide.


That kind of thing might fly in some arenas, but judges have literally no sense of humor when evidence is “lost” in a criminal case, or civil wrongful death suit. That’s more or less a quick way to lose in the worst possible way.


Judge - you don't understand - the office that's responsible for this remote deletio^w securing of data is in another jurisdiction (Canada!). So Uber USA is absolved[1].

1. Uber really had a tool that did this IRL: https://www.theverge.com/2018/1/11/16878284/uber-secret-tool...


As I said, context matters. In the context of a death, this kind of thing is doom. When someone is killed, there is also a lot of cooperation across jurisdictions.


I agree completely. One thing that is important to me is that the whole self-driving field will learn from every mistake. In other words, every self-driving car should only make the same mistake once, whereas with humans, each human has to learn only from their own mistakes.


If there's one company who hasn't demonstrated it's learned from its mistakes it's Uber.

But let's extrapolate. Say one day there are 20 self driving car companies. Should they be required to share what they learn so the same mistakes aren't repeated by each company or does the competitive advantage outweigh the public benefit from this type of information sharing?


Nice case for a "Reductio ad Absurdum":

Should "not killing pedestrians" be a competitive advantage? No. It should be mandatory for every vehicle.

Therefore, we should make sure that everything works towards this goal, including sharing data!


Federal regulation demanding shared accident databases wouldn't be the worst thing...


What mistakes has Uber not learned from recently? They changed their leadership. That shows some learning.


"recently" is a pretty big caveat to throw in there. Historically Uber hasn't changed any of their behaviour until they've been caught doing something. Let's wait and see how they react to something like this.


Good point, but I added that in because I think recent is the most important for trying to know if something learns from its mistakes.

They seem to be adjusting quite a bit in the last 1-2 years so that is the definition of learning. Otherwise, just showing how companies were once dumb is usually not very useful in knowing how adaptable they are.

I agree that we’ll know better looking back how they react to this event.


I think there is a case for open sourcing the decision making code for this exact reason.


Or does the prospect of onerous regulation outweigh the competitive advantage?


Airlines are the same way. Every time there’s been a crash they’ve learned something and changed procedures to prevent it again. It’s made flying pretty safe, unless you’re an animal flying on United...


Uber's algorithms might learn from this. None of the other self driving car companies will.


I disagree. I would be astounded if Waymo, Tesla, and every other player in the field isn't already trying to craft 1000 new test cases about this incident. It's now the second(?) known death involving this technology. The first question their bosses will be asking them in their next meeting would be "would our cars have done this, too?"


That's where the NTSB steps in and regulations get created.


I'm having a hard time imagining that kind of regulation of ML algorithms.


Making sharing of all sensor data of an accident and the preceding couple of minutes mandatory could be a simple regulation that would be very helpful for all companies trying to improve their algorithms.
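Purely as an illustration of how lightweight such a requirement could be, here is a sketch of what a mandated, shareable incident record might contain. Every field name below is invented for illustration; no real regulation, format, or API is implied.

  from dataclasses import dataclass, field
  from typing import Dict, List, Tuple

  @dataclass
  class SensorFrame:
      timestamp_ms: int              # milliseconds since the start of the recording window
      lidar_points: bytes            # raw point cloud, kept as an opaque blob
      camera_frames: List[bytes]     # one encoded image per camera
      radar_tracks: List[Dict]       # detected objects with range and relative velocity
      planned_path: List[Tuple[float, float]]  # (x, y) waypoints the planner intended to follow
      actuator_state: Dict           # steering angle, throttle, brake pressure, etc.

  @dataclass
  class IncidentRecord:
      vehicle_id: str
      software_version: str
      incident_time_utc: str
      window_seconds: int = 120      # "the preceding couple of minutes"
      frames: List[SensorFrame] = field(default_factory=list)
      disengagements: List[Dict] = field(default_factory=list)  # human takeovers in the window

The regulation could specify only the container and a retention window and stay entirely silent about anyone's algorithms.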


That isn't correct. Culture is learning from other people's mistakes. The only question is whether the human can and is willing to accept the lessons (and whether the lessons are correct).


Culture comprises many things, some of them similar, but not that. By definition, culture is merely a collective persistence, and therefore inherently superficial. Culture cannot capacitate "learning" or even lessons. Culture is limited to memetic abstractions, with a lot of alleged noise. Insofar as culture supports the persistence of worthwhile norms, it would be limited to evolutionary adaptation (survival of the fittest). It's important to make that distinction to avoid the fallacy of argumentum ad populum.


> Sounds like Uber's platform may not be up to that standard yet, which makes me wonder why they're on public roads.

Who exactly do you think is granting Google, Uber, etc. approval for trials like this on public roads? It's going to be some bureaucrat with zero ability to gauge what sort of safety standard these companies' projects have reached.

There are no standards here... what were you expecting would happen?


Not even a self-driving car can stop in time for a deer jumping out from behind tree/bush cover.

People still need to look both ways when coming out from behind cars.

I want more details.


That's a really good point that many (myself included) aren't mentioning. Stopping distance is a universal, reaction times be damned. Would be curious to see if that played a part.


Sometimes the best reaction in this situation is not to stop, but to swerve around the obstacle, perhaps even accelerating slightly to do so quickly. Deer can be pretty unpredictable here, but most on foot humans will not put themselves into the new path of your vehicle.


Perhaps (self driving) cars can be redesigned to mitigate the outcome of pedestrian collisions, eg, like cowcatchers on trains [1], except something more appropriate for the situation. External airbags?

[1] https://en.wikipedia.org/wiki/Pilot_(locomotive)


Here's a Times article you reminded me of that mentions Saab engineers putting on crash helmets and running the 9-5 into a concrete moose.

http://www.nytimes.com/1998/04/05/automobiles/behind-the-whe...


> Sounds like Uber's platform may not be up to that standard yet, which makes me wonder why they're on public roads.

Uber cutting corners and playing fast and loose with legislation? Unheard of!

Here's hoping they get hit with a massive, massive wrongful death lawsuit.


I'd also add the goal of self-driving cars is to decrease costs for ride-sharing, home-delivery companies, etc, and also to decrease congestion via coordination amongst autonomous vehicles.


It's an interesting dynamic. We want this tech to be much better than us terrible humans before we deploy it, and anything as bad as us terrible humans is not acceptable.


The first martyr of the AI age, I suppose.


> That's a very bad look; the whole point of self-driving cars is that they can react to unexpected circumstances much more quickly than a human operator, such as someone walking out into the road. Sounds like Uber's platform may not be up to that standard yet, which makes me wonder why they're on public roads.

That's an invalid conclusion to draw from this accident.

There were 34,439 fatal motor vehicle crashes in the United States in 2016 in which 37,461 deaths occurred. This resulted in 11.6 deaths per 100,000 people and 1.16 deaths per 100 million miles traveled.[1] As far as I know, the instance this article is about is the first death in an autonomous vehicle accident, meaning that all the 2016 accidents were humans driving.

Why is it that you see one death from an autonomous car and conclude that autonomous cars aren't ready to be driving, but you see 37,461 deaths from human drivers and don't conclude that humans aren't ready to be driving?

I admit that there just aren't enough autonomous cars on the road to prove conclusively that autonomous cars are safer than human-operated cars at this point. But there's absolutely no statistical evidence I know of that indicates the opposite.

[1] http://www.iihs.org/iihs/topics/t/general-statistics/fatalit...

EDIT: Let's be clear here: I'm not saying autonomous cars are safer than human drivers. I'm saying that one can't simply look at one death caused by an autonomous car and conclude that autonomous cars are less safe.


As of late 2017, Waymo/Google reported having driven a total of 4 million miles on roads. Given that Waymo is one of the biggest players, it's hard to see how all of autonomous cars have driven 100 million miles at this point.

https://techcrunch.com/2017/11/27/waymo-racks-up-4-million-s...

Nevermind that the initial rollouts almost always involve locales in which driving conditions are relatively safe (e.g. nice weather and infrastructure). Nevermind that the vast majority of autonomous testing so far has involved the presence of a human driver, and that there have been hundreds of disengagements:

https://www.theverge.com/2018/1/31/16956902/california-dmv-s...

Humans may be terrible at driving, but the stats are far from being in favor of autonomous vehicles. Which is to be expected given the early stage of the tech.


My point is that "the stats" don't exist if you're only looking at one autonomous car accident without comparing it to the relevant statistics from human accidents.


If the stats don't exist then what was your whole basis for making an argument? On what grounds do you have to argue that we shouldn't be pessimistic about the state of autonomous driving? That you pulled out the IIHS stats without apparently bothering to look up the number of reported self-driving miles driven seems to suggest that you assumed that the fatality rate was obviously skewed against human driving.

That's fine, we don't have to consider just fatal accidents:

https://www.theverge.com/2018/1/31/16956902/california-dmv-s...

> One result of the sharp increase in GM’s number of miles driven is a plethora of accidents. The auto giant’s autonomous cars were involved in 22 fender benders over the course of the reporting period (and two more in 2018). That’s one crash for every 5,985 miles of testing.


> On what grounds do you have to argue that we shouldn't be pessimistic about the state of autonomous driving?

I didn't say that.

I'm not saying autonomous cars are safer, I'm saying that this accident doesn't prove autonomous cars are less safe.

> That you pulled out the IIHS stats without apparently bothering to look up the number of reported self-driving miles driven seems to suggest that you assumed that the fatality rate was obviously skewed against human driving.

I brought up the IIHS stats to show that merely reporting an accident in isolation from a full statistical model doesn't prove anything.


OK. And the person you responded to with the fatality stats said nothing about proving which way either:

> That's a very bad look; the whole point of self-driving cars is that they can react to unexpected circumstances much more quickly than a human operator, such as when someone walks out into the road. Sounds like Uber's platform may not be up to that standard yet, which makes me wonder why they're on public roads.


I'm not sure how you can interpret what you're quoting as saying anything else.


The commenter said it's a "very bad look". Followed by this:

> On the other hand, it sounds like it happened very recently; I guess we'll have to wait and see what happened.

How in the world do you see that as an assertion of proof?


He's arguing from foundations. The hypothesis is that autonomous drivers should perform better at task X than the null hypothesis (aka human drivers). So any instances where autonomous drivers do not seem to perform better are all potential counter-arguments to that hypothesis.

The fact that human drivers aren't particularly good isn't really relevant, beyond setting a correspondingly low bar within the null hypothesis.

This all ties to regulation allowing these vehicles to drive on public roads, because that regulation was permitted due to the above hypothesis and hopeful expectations that it would be true.

Obviously, I haven't seen the entirety of the data set to know fatalities per car-mile. Which would be the relevant statistic here. I also didn't see such a number in your post, which I'm assuming means you are probably not aware either. But simply providing the numbers for the null hypothesis doesn't do anything.


[flagged]


You'll have to make an attempt, because I don't see anything contradictory. If human drivers are your null hypothesis, then you cannot use the fact that "humans are bad" as a blanket acceptance of autonomous vehicles. They are already built into the equation, by virtue of being the null hypothesis. So you can argue that hypothesis, but that's not what your comment did. Your comment stated some numbers for the null hypothesis, but had no numbers for the hypothesis under test, and therefore didn't really mean much at all.

I'd like to note that I am not the person you replied to, and I personally am not arguing that any program should be shut down based on this one incident. But it's certainly not encouraging that the autonomous vehicle seems to have failed a test at which everyone would have expected it to perform well.

Also, to quote from your previous post:

> Why is it that you see one death from an autonomous car and conclude that autonomous cars aren't ready to be driving, but you see 37,461 deaths from human drivers and don't conclude that humans aren't ready to be driving?

I think we conclude, quite often in fact, that individual humans aren't fit to be driving. Death is one of those scenarios that will quickly lead to such a conclusion.

One huge difference between individual humans and autonomous vehicles is that we can reasonably argue that any Uber vehicle would have performed the same in this scenario. So this is perhaps more akin to saying that this particular driver is not fit for the task, except that this particular driver happens to be driving dozens or more vehicles all at once.


> You'll have to make an attempt, because I don't see anything contradictory. If human drivers are your null hypothesis, than you cannot use the fact that "humans are bad" as a blanket acceptance of autonomous vehicles.

I said, "I admit that there just aren't enough autonomous cars on the road to prove conclusively that autonomous cars are safer than human-operated cars at this point." and I've vocally criticized autonomous cars elsewhere, so I'm not sure where you get the idea that I favor a blanket acceptance of autonomous vehicles.

I'm not saying that autonomous vehicles are safer, I'm saying this isn't evidence that autonomous vehicles are less safe.

> One huge difference between individual humans and autonomous vehicles is that we can reasonably argue that any Uber vehicle would have performed the same in this scenario.

I disagree: you can argue that the autonomous vehicles will behave the same given the same inputs, but they will never have exactly the same inputs even if the situation were identical to a human observer, so that's a fairly moot point. If you step back to a larger description of the situation (car crossing bike lane to get into a turn lane, car doesn't identify and avoid bicyclist in bike lane) then you are going to be looking at a percentage of the time where autonomous car will make a mistake. There's also a percentage of the time where a human driver will make the same mistake. The only way you can compare the safety autonomous cars to human drivers in this situation is to compare those percentages. And that's ignoring the fact that there are thousands of other situations in driving--even if autonomous cars fail 100% of the time in this situation, there may be enough other situations where they perform better enough than human drivers that they're safer. Simply saying that a car made a mistake in this situation doesn't give us any information at all.


> I'm not saying that autonomous vehicles are safer, I'm saying this isn't evidence that autonomous vehicles are less safe.

> And that's ignoring the fact that there are thousands of other situations in driving--even if autonomous cars fail 100% of the time in this situation, there may be enough other situations where they perform better enough than human drivers that they're safer. Simply saying that a car made a mistake in this situation doesn't give us any information at all.

But it does give us information. This incident counts, along with all the other incidents and non-incidents that do or do not occur, toward the rate of incidents per car-mile. One cannot simply wish away this one incident and make it disappear. It is now forever part of the statistics which will either support or undermine the hypothesis of autonomous cars being safer.

> I disagree: you can argue that the autonomous vehicles will behave the same given the same inputs, but they will never have exactly the same inputs even if the situation were identical to a human observer, so that's a fairly moot point.

It's not a moot point. Yes, this exact scenario with these exact parameters only occurred once. However, we can still reasonably argue that if we reversed time and replaced that exact Uber autonomous driver with another instance of the autonomous Uber driver and turned time back on, it would have reacted exactly the same, the same way that I expect the same version of Notepad to open my text file exactly the same on this computer as on another. The alternative is that there is some nondeterministic behavior in the driver that is not tied to input... in which case, good luck with that in court.

However, we cannot make that same argument by replacing human drivers. Because each human is, in fact, different.

This is only important in the context that a fanciful revocation of Uber's "autonomous driver's license" would apply to all instances of the autonomous driver, since they would all have been reasonably expected to perform the same.


> But it does give us information. This incident, along with all the other incidents and non-incidents, feeds into the statistics measured as incidents per car-mile. One cannot simply wish away this one incident and make it disappear. It is now forever part of the statistics which will either prove or disprove the hypothesis of autonomous cars being safer.

Can you point out the part of the article or the post that I was responding to which mentions how many car-miles were traveled?

This is exactly what I'm pointing out.

> The same way that I expect the same version of Notepad to open my text file exactly the same on this computer as on another computer.

Notepad doesn't have to read your text file through a lens with slightly different focus, viewing area, and patterns of dust on it each time.

> The alternative being that there is some nondeterministic behavior in the driver that is not tied to input...

The nondeterministic behavior is that the hardware which collects the input will never be the same. I don't know whether the software is nondeterministic (it wouldn't surprise me) but I know the hardware is never going to be identical--hardware is always made to tolerances and always has some degree of variability.

Your claim is tantamount to saying that if we put the same person in the same situation but with two different sets of eyes, the eyes would have no effect on the results.

> However, we cannot make that same argument by replacing human drivers. Because each human is, in fact, different.

Autonomous cars are, in fact, different. Just because they're running the same software doesn't mean they're the same; even if the software is completely deterministic, software is only a component of the autonomous driver.


> Can you point out the part of the article or the post that I was responding to which mentions how many car-miles were traveled?

> This is exactly what I'm pointing out.

I think, if your intent is to show that this is incomplete information, that...

1) No one is arguing that.

2) You have not done a great job of attempting to relay that, given phrases like, "Simply saying that a car made a mistake in this situation doesn't give us any information at all."

3) Sometimes that doesn't matter. For instance, Florida law is a mandatory 6 months to one year license revocation on DUI, regardless of the circumstances or information.

> Notepad doesn't have to read your text file through a lens with slightly different focus, viewing area, and patterns of dust on it each time.

Difficulty of the task is unrelated to expected outcomes of the task given the same inputs. And we already covered the topic of duplicating the exact situation... Not sure what you're trying to gain through this line of argument.

> Your claim is tantamount to saying that if we put the same person in the same situation but with two different sets of eyes, the eyes would have no effect on the results.

I am making no such claim, and I cannot believe that you are so adamant about not understanding my actual claim. This is a hypothetical situation. There is no mention in this scenario about changing the car, including any of the sensor hardware. I am interested in replacing only the driver (or driver software) into the exact same circumstance.

(EDIT: OK, reading back, I did say "any Uber vehicle". While your point stands, I think it's a very uncharitable reading. If hardware sensor tolerances and specks of dust on the camera are going to determine whether a life is lost or not, either those tolerances need to be driven down or this entire idea needs to be rethought. After all, we don't allow those who are legally blind to drive unless they have corrective lenses...)

Assuming the software is deterministic [1], by definition given the same inputs it will result in the same output. Therefore, "replacing" the autonomous driver with another would have resulted in the same incident. You cannot say that with any measure of confidence for any two pairs of human drivers.

[1] Which seems like it would be a good thing to assume, since I don't think one would get much traction by arguing that we should be putting vehicles with non-deterministic behavior on the road...


No human driver will ever receive the same input twice, but we still suspend people's licenses sometimes after a single incident. Are you arguing that we need to let Uber kill a few more pedestrians so we can more accurately determine the safety of their platform vis a vis human drivers? Why can't they fabricate some tests that demonstrate their safety in a controlled environment first?


> Are you arguing that we need to let Uber kill a few more pedestrians so we can more accurately determine the safety of their platform vis a vis human drivers?

Are you making accusations in question form so you don't have to back them up? I certainly didn't say that.

> Why can't they fabricate some tests that demonstrate their safety in a controlled environment first?

Are you assuming they haven't done this?


I'll admit that I'm of the GP's mind in wanting to know what you consider to be "evidence". Because you keep reiterating this sentiment:

> I'm not saying that autonomous vehicles are safer, I'm saying this isn't evidence that autonomous vehicles are less safe.

It's true that we need to wait for more information about this particular incident -- which is exactly the caveat that the commenter you initially responded to had said [0]. But assuming the facts aren't abnormally different from what they seem to be -- an Uber AV hit and killed a jaywalker -- how is that not evidence toward the argument that AVs are less safe? It obviously isn't conclusive evidence. But if Uber goes on to kill a pedestrian for every million miles driven, this first data point would surely be part of the empirical evidence, no?

[0] https://news.ycombinator.com/item?id=16620042


> Are you making accusations in question form so you don't have to back them up? I certainly didn't say that.

Nope, I'm just having trouble grokking your argument and I thought that might be it. It's true I added a rhetorical edge to the language that was probably unnecessary. I apologize for that--I'm not trying to put words in your mouth. What is your argument?


1 July 2016 "Tesla driver dies in first fatal autonomous car crash in US"

https://www.newscientist.com/article/2095740-tesla-driver-di...


you have to take into account miles driven. Yes, we have 37K fatalities, but trillions of miles driven. So, it comes out something like 1.2 fatalities per 100 million miles driven for human drivers. Which means, so far, self driving cars have a worse record per 100 million miles driven. So far.
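
Back-of-the-envelope, with assumed round figures (roughly 37k deaths and roughly 3.2 trillion vehicle-miles per year in the US), that works out like this:

    # Rough arithmetic behind the "~1.2 fatalities per 100 million miles" figure.
    # Both inputs are approximate, commonly cited ballpark US numbers.
    us_fatalities_per_year = 37_000
    us_vehicle_miles_per_year = 3.2e12

    human_rate_per_100m = us_fatalities_per_year / us_vehicle_miles_per_year * 1e8
    print(round(human_rate_per_100m, 2))  # ~1.16 fatalities per 100 million miles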


I haven't been able to find any reliable source of miles driven by self-driving cars to make that claim, but if one exists it wouldn't surprise me.

But that's not really my point. My point isn't that self-driving cars are safer, it's that merely looking at one car accident doesn't prove that self-driving cars are less safe.


as of today, yes, the data indicates that self driving cars are less safe. The fact is, as of today, self driving cars are generating more fatalities per mile driven than human drivers. That may change in the future as technology matures.


I'd love to see the data you're talking about, but even if such data exists, that doesn't make extrapolating from this one accident any more valid.


https://medium.com/waymo/waymos-fleet-reaches-4-million-self...

https://www.forbes.com/sites/bizcarson/2017/12/22/ubers-self...

so, 6 million miles driven and one fatality is a significantly worse record than human drivers. So far.
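
A rough sketch of that comparison (the mileage total is approximate, and with a single fatality the estimate is extremely noisy, so treat the ratio as illustrative only):

    # Implied self-driving fatality rate vs. the human baseline, using the
    # rough figures from this thread. One fatality is far too small a sample
    # for firm conclusions; this only shows what the raw ratio looks like.
    av_miles = 6e6             # ~6 million self-driving miles (approximate)
    av_fatalities = 1
    human_rate_per_100m = 1.2  # ~1.2 fatalities per 100M miles for human drivers

    av_rate_per_100m = av_fatalities / av_miles * 1e8      # ~16.7 per 100M miles
    print(av_rate_per_100m / human_rate_per_100m)          # ~14x the human rate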


Actually there was a metric of accidents per distance driven. It was often pulled out by Tesla before their first fatal accident last year (the AI didn't see the rear of a truck because of the sun or something like that).

This metric was often decried because it had poor statistical significance. It would however be nice to update this metric in light of this death.

Maybe now this metric would indicate that AI is more dangerous than humans; it will be interesting to see whether the perception of this flawed metric evolves in reaction...


> Actually there was a metric of accidents per distance driven. It was often pulled out by Tesla before their first fatal accident last year (the AI didn't see the rear of a truck because of the sun or something like that).

I'm curious to see that, do you have a link? That would actually be a valid comparison.


I'm on my phone so maybe I haven't found the most relevant links, but I found the following.

This is Elon Musk bragging before the first Autopilot accident: https://www.telegraph.co.uk/technology/2016/04/25/elon-musk-...

This is a counterpoint with some facts: http://safer-america.com/safe-self-driving-cars/

And concerning the crash, the last update seemed to point to driver's fault... with Autopilot... I don't know if the final ruling is out yet:

https://www.washingtonpost.com/news/the-switch/wp/2017/06/20...


According to that "update", Tesla Autopilot can never be at fault right? Because the driver is supposed to be in charge. So, even if Tesla rams full speed into a semi, the driver is at fault!


Yes, that's how Level 2 systems (like Tesla Autopilot) work.


Neat, thanks for the data.


If the solution does not solve anything, what's the point of the „solution“? Most of the deaths on the road I hear about in my country are the result of someone doing something really reckless and stupid. „Normal“ drivers do not kill themselves or others. So if a self-driving car is only as good as a dumb driver — I do not want these cars on the road. Also, interestingly enough, there is little talk about the human driver assisted by technology rather than replaced by it. For some reason it is binary: either human driver or self-driving. How about some human drivers plus collision avoidance systems, infrared sensing systems (way too many people die there simply because they walk on the road in the dark without any reflectors/lights), etc.?


> If the solution does not solve anything, what's the point of the „solution“? Most of the deaths on the road I hear about in my country are the result of someone doing something really reckless and stupid. „Normal“ drivers do not kill themselves or others. So if a self-driving car is only as good as a dumb driver — I do not want these cars on the road.

If self-driving cars are between dumb drivers and normal drivers, and self-driving cars take dumb drivers off the road, it may be a net positive.


Not if they also take normal drivers off the road...


That depends how many dumb drivers it takes off the road compared to normal drivers, and the relative percentages of accidents caused by each.

Let's say we have 9 normal drivers and 1 dumb driver. The 9 normal drivers cause 0 accidents per year each, and the 1 dumb driver causes 15 accidents per year. Let's say autonomous cars cause 1 accident per year. If autonomous cars replace all the drivers, there's 10 accidents per year instead of 15.

Obviously these are hypothetical numbers. All I'm saying is that if you only have one of these numbers, you don't know which kind of driver is safer.
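
Spelled out (all numbers invented for illustration, as above):

    # The hypothetical fleet comparison from the paragraph above.
    normal_drivers, dumb_drivers = 9, 1
    accidents_per_normal, accidents_per_dumb = 0, 15
    accidents_per_autonomous_car = 1

    human_fleet = (normal_drivers * accidents_per_normal
                   + dumb_drivers * accidents_per_dumb)          # 15 per year
    autonomous_fleet = ((normal_drivers + dumb_drivers)
                        * accidents_per_autonomous_car)          # 10 per year
    print(human_fleet, autonomous_fleet)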


I guess I'm just working on anecdotes and guesswork here, but I always assumed the vast majority of drivers are pretty reasonable most of the time. I imagine if this were not the case the clusterfuck on the roads would be considerably worse than what I see in practice. It is certainly true that the dumb drivers heavily outweigh the sane ones in my memories of traffic, but I also know the details of more plane crashes than successful flights.


Maybe simply because "normal" people cause accidents, it's not newsworthy enough for you to hear about it?


Apples and oranges.


> don't conclude that humans aren't ready to be driving?

I'm not sure I see anyone here making that conclusion. I think you're the only one who's brought it up.

For one, I personally sure as fuck do not think humans are ready to be driving.

> That's an invalid conclusion to draw from this accident.

The conclusion was not invalid at all. Other self-driving car companies have driven more miles than Uber has and they have done so safely. Uber has even taken its cars off the road, so even Uber agrees that their self-driving cars are not ready for the roads yet.

It is also important to take into account what Uber is like when it comes to safety and responsibility. They have knowingly hired convicted rapists for drivers, they have ridiculed and mocked and attempted to discredit their paying customers who have been raped by their employees/drivers, they have spied on journalists, they have completely disregarded human safety on numerous occasions. A company with a track record like Uber's probably should not be granted a license to experiment with technology like these self-driving cars on public roads.

They just aren't responsible enough.


> I'm not sure I see anyone here making that conclusion. I think you're the only one who's brought it up.

Look at the post I'm responding to, and the section I quoted.

> Uber has even taken its cars off the road, so even Uber agrees that their self-driving cars are not ready for the roads yet.

Uber avoiding a PR nightmare should not be taken as Uber thinking their self-driving cars aren't ready.

> It is also important to take into account what Uber is like when it comes to safety and responsibility. They have knowingly hired convicted rapists for drivers, they have ridiculed and mocked and attempted to discredit their paying customers who have been raped by their employees/drivers, they have spied on journalists, they have completely disregarded human safety on numerous occasions. A company with a track record like Uber's probably should not be granted a license to experiment with technology like these self-driving cars on public roads.

I'm not defending Uber as a whole. I think in general they're a fairly typical rent-seeking middle man whose only real innovation has been figuring out a way to break the law ambiguously enough to get away with it (admittedly, a law I don't agree with). I don't even necessarily think autonomous vehicles are a good thing. I just think that if you're going to criticize autonomous vehicles, safety isn't the criticism I'd level against them, because it's not backed up by the evidence when compared to how unsafe human drivers are.


They will have video for sure since they are testing, so we will see. Your statement about reaction time assumes so many things; may as well declare the AI guilty now, right?


It's not "intelligent", it's a dumb computer that is programmed by humans. The people responsible for putting it on the road must be liable.


It's not intelligent, but it's also not programmed by humans in the sense that someone painstakingly put in explicit logic for all the possible cases.


It is programmed by humans. Machine learning algorithms are designed and implemented by humans. If machine learning algorithms do not perform well in a way that puts human lives at risk, the humans who designed and implemented them are responsible.


nope, it's trained as a neural network. Nobody ever sits down and programs in 'how to handle an intersection with a bike lane' other than putting that sort of driving experience into its training data. It may have modules designed for general scenarios (i.e. one module is great at freeway driving, another is great at intersections, etc.) but still trained by experience.


It's programmed by humans. Get over it. It's not intelligent, it's not "learning" it's following instructions, saving data, and iterating. Your comment is obvious to the point of meaninglessness.

This tendency to buzzword out of obvious truths (NN's are programs for example) derails rational discussion.


This comment breaks the HN guidelines by being uncivil as well as by calling names in the sense described here:

https://news.ycombinator.com/newsguidelines.html

Doing these things derails discussion much faster than an obvious comment, so would you please (re-)read the rules and abide by the spirit of this site when posting here?


Thanks, I appreciate the check. Sorry about that jobigoud.


> may as well declare the AI guilty now right?

AI (short of AGI) is never guilty; its creators and operators, OTOH...


If you're familiar with that part of ASU campus you honestly could have seen this coming. There are a few different self driving cars in Tempe (Waymo, Uber, GM...) and Uber drives by far the most aggressively. They drive on a pedestrian-heavy road and drive faster than most of traffic. They accelerate rapidly when a light changes and can brake hard. There are always tons of pedestrians in the area and it isn't uncommon to almost get run over even when crossing during the day.


I wonder if it's because Uber algorithms are trained using Uber driver data (taxi drivers tend to be very aggressive drivers).


I don't find that at all, I feel like they're happy to take their time and make more money.


Uber drivers don't get paid by the minute. It's a fixed fare.


No, drivers are paid by distance (mileage). Uber is paid the fixed fare.


Depends on where you live. Uber has a per-minute rate in some places.


This matches my experience driving elsewhere. You need to adapt to the context, the car should not drive the same way everywhere.


Obviously more information is needed, but I thought the entire point of having a driver behind the wheel is to manually intervene to prevent this very situation?

I'm very curious to see how they'll investigate this and who will be determined to be at fault (person behind the wheel or Uber). It will likely set a precedent.


Human beings simply cannot switch between "not focussed" and "in charge of a car, taken out of autonomous mode, and actively avoiding collision" fast enough to avoid most accidents, unfortunately. Neither can humans maintain the focus required to be ready to do that when 99.9% of the time they're not required to do anything.

Semi-autonomous cars have drawbacks.


I agree with what you are saying, and that it would be very dangerous to sell cars where drivers may be lulled into complacency of thinking they don't need to pay attention to the road, until self driving cars get good enough that human attention really isn't needed.

However, these are test vehicles. The driver's full-time job is to be focused on the road and what the car is doing. They shouldn't have any misconceptions about the need for them to stay focused on the road. Now granted, even then attention will lapse occasionally, just like it does when you are actually driving. But I don't think that being physically in control of the wheel is necessary for maintaining focus at the same level as a good driver. Driver's Ed instructors need this skill. Also some back seat drivers I know are quite good at maintaining strong focus on the road regardless of who is driving :)


It's not about misconceptions, humans just aren't capable of it. Maintaining concentration when you're actively taking part in a task is doable, but maintaining concentration when you haven't had to do _anything_ for the past two hours is not. It isn't about whether the driver believes they should be paying attention or not, it's just not how human attention works.


> Human beings simply cannot switch between "not focussed" and "in charge of a car

This is not how many people drive in non-autonomous cars? Commuting is on auto-pilot for many of us; you don't really remember how you got to work, and I see people tying their shoelaces, calling, eating sandwiches, reading the paper, chatting on WhatsApp all the time. Sure it's all illegal (in a lot of countries), but people are bored and have driven that road 1000x.


> This is not how many people drive in non-autonomous cars?

No, that is not how people drive in non-autonomous cars. The phenomenon you're referring to is related to how sparsely information is stored to memory when doing a routine task (even when you were fully concentrating). Just because you can't recall an activity in detail does not mean you were not paying full attention.


Which is why the machine should be backup for the always-driving human, only leaping in to correct failings.


Exactly like what has been done successfully in aviation for decades now. For example Auto-GCAS. I don't understand why car companies are trying to go against a proven model.


For a couple years now we've had auto braking and then more recently lane keep assist which is pretty much exactly the 'automated backup to human drivers' option. It's great but people still want more automated driving systems.


Because nobody is trying to remove pilots from the equation. Uber sees this as a testbed for a future without human drivers.


The military is actively working to remove pilots from the equation, at least for certain missions. Eventually some of that technology will be spun off to civil air transport.


The other proven model is autonomous metro trains. They are fully automatic, and should the system fail can sometimes be controlled manually.

This is preferable, because a child, blind person etc needs to be able to use the autonomous vehicle alone.


That only works for vehicles on dedicated separate paths.


Modern aviation includes fully autonomous modes though. An airplane's autopilot is simpler than a car's autonomy, but they accomplish essentially the same goal: get you from point A to point B with no human interaction. AFAIK, even takeoff and landing can be done mostly autonomously on modern aircraft.

If anything, airplanes prove that machines doing most of the work and humans stepping in only when necessary is a proven model.


That's not at all the same thing. When commercial pilots engage the autopilot (or autoland) they're still actively flying the airplane, just operating at a higher level of abstraction to decrease the workload and fatigue. They're not sitting back and playing Candy Crush on their smartphones.

https://www.usatoday.com/story/travel/columnist/cox/2014/08/...


> If anything, airplanes prove that machines doing most of the work and humans stepping in only when necessary is a proven model.

Airplanes have far less traffic to deal with. And the points at which they deal with traffic (e.g. takeoffs and landings) are completely controlled by humans, including many humans outside of the plane.


This is incorrect. There are at least automatic landing systems that are sometimes used.


So what are the conditions in which auto landing is not used?


Apparently pilots generally prefer not to use them. Not because they don't work but because pilots still need to be on alert, and so it's easier to just land manually.

They're used in low visibility conditions with relatively calm weather, but don't work well in bad weather.


A significant number of air accidents have resulted from unintended interaction of the autopilot and the pilot. Usually through some level of confusion about whether the autopilot is engaged or not (and 'how engaged', aircraft autopilots can have a complicated array of modes). There's a lot of learning in autonomous system to human interface design embodied in modern aircraft autopilots.


When an unusual situation causing the autopilot to disengage happens in the air, you often have minutes of time to deal with and correct the issue, almost never less than 5-10 seconds. In a car, you're lucky if you have even a single second, and that's just not enough time to take over.


Automation for airplanes is really so much simpler, a lot of that thanks to a lot of hidden human effort in keeping planes well separated, so pretty much all it has to do to fly is keep course and control. If that was all we had to do for cars it'd be pretty simple.


Please no. I don't want to be killed by my car's software.



Your car's software already has the capability to kill you. Things like automatic braking are much more likely to save your life than to endanger it. To say otherwise is just paranoid scaremongering.


This breaks the site guideline against calling names in comments. Could you please not do that? Your comment would be much better without the last sentence.

https://news.ycombinator.com/newsguidelines.html


notallcars


Certainly, humans will never be perfect but there is the wicked problem of how we interact with systems that behave correctly 99% of the time but require human attention the other 1%. There was an excellent Econ Talk podcast about this topic a while ago:

http://www.econtalk.org/archives/2015/11/david_mindell_o.htm...

However, in my opinion, the driver and by extension his employer Uber must be held responsible in this case. Uber have a license to test their self-driving cars only if there is a human driver in control at all times. Regardless of the crazy things their software decides to do, the human is there as a person of ultimate legal liability.


How about autonomous on the highway, turnpikes, etc., and human-operated on streets where a car might ever need to yield to pedestrians or cyclists outside of red lights?


> I'm very curious to see how they'll investigate this and who will be determined to be at fault (person behind the wheel or Uber). It will likely set a precedent.

It's also plausible that the pedestrian was at fault. Under some circumstances it would be impossible for a human driver to avoid collision with a pedestrian, this may be no different.


Motor vehicle occupants are much more likely to survive than pedestrians and cyclists are. But blaming victims (especially dead ones) has a long and storied history as far as law enforcement and the road.

Hopefully with self driving vehicles we can start to move beyond this attitude, especially when there should be ample camera footage available to help dispel the usual claims that the person struck "came out of nowhere" or whatever.


The only way this is a reasonable failure of both the computer and the human driver is if they both physically had no time to react. Maybe that was the case, maybe it wasn't.


Given these cars have been blowing red lights, the human drivers aren't doing their jobs and overriding the car.


It could have been the fault of the pedestrian. I'm not saying it is, just saying there is a third possibility.


If humans can't prevent themselves from having accidents 100% of the time when they're in total control of the vehicle, why should we expect they could prevent an accident 100% of the time when they're in partial control of the vehicle?


" Obviously more information is needed, but I thought the entire point of having a driver behind the wheel is to manually intervene to prevent this very situation?"

I think it's time to point out the obvious, and require that autonomous cars apply the brakes first, and THEN require driver intervention.

And that they be a whole lot quicker to err on the side of braking.

Cameras getting fuzzy?

Slow down.

Your ML algorithms are showing lower confidence measures for how they classify nearby objects and trajectories?

Slow down.

Nearby vehicles slowing down and you don't know why?

Slow down.

This is inexcusable.
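
A toy sketch of what such a fallback policy could look like (every signal name and threshold here is invented; a real driving stack is vastly more involved):

    from dataclasses import dataclass

    @dataclass
    class PerceptionSummary:
        # Hypothetical per-cycle summary of how confident the system is.
        camera_sharpness: float          # 0..1, "cameras getting fuzzy?"
        min_object_confidence: float     # lowest classifier confidence on nearby objects
        unexplained_braking_ahead: bool  # other vehicles slowing and we don't know why

    def target_speed(current_speed_mph: float, p: PerceptionSummary) -> float:
        """Err on the side of slowing down whenever confidence drops."""
        speed = current_speed_mph
        if p.camera_sharpness < 0.8:
            speed = min(speed, current_speed_mph * 0.5)
        if p.min_object_confidence < 0.6:
            speed = min(speed, current_speed_mph * 0.5)
        if p.unexplained_braking_ahead:
            speed = min(speed, current_speed_mph * 0.7)
        return speed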


Nothing I love more than HackerNews armchair engineering.

It's not that simple, you're assuming the car even had some indication that something was wrong. For all we know the car's vision was showing high confidence it saw an open road.


Car rapidly approaches you from behind.

Slow down?


Consciousness requires a body. This is how it is, here.


Here we go. We'll now have the first traffic fatality trial where it's not drivers-trying-drivers but people-trying-a-megacorp.

(Disclaimer: I'm a bike advocate, so I may have a different perspective on some of this than most.)

Our car-based transportation system is far and away the most dangerous thing any of us accept doing on a daily basis. 40,000 die a year.

But when cases come to court, everyone on the jury has in the back of their mind "that could have been me if I lost concentration at the wrong moment, or made one bad judgement, etc etc."

So penalties are comparatively light for traffic fatalities. Big punishments are only meted out if the case is so egregious -- repeated drug use, flagrantly reckless behavior -- that the jury can be convinced that the driver is different from them.

In other words, drivers don't get punished for doing something dangerous, because everybody on the road is doing something dangerous. They get punished for doing something more dangerous than the norm.

In this case, there's no question that the "driver" is different than the jury -- it's a computer. Now the symmetry that made jurors compare themselves to the accused is broken.

The result, and what self-driving car advocates don't get, is that self-driving cars don't just have to be safer than human drivers to be free of liability, they need to be safe period. In a trial, they don't benefit from the default "could have been me" defense.

That's a HUGE requirement. In fact, it's probably impossible with our current road system. It won't just take better self-driving cars, but better roads and a major cultural change in our attitudes about driving.

As a bike advocate, I welcome this shift, but I also see how deluded many of the current self-driving projects are. Software moves fast, but asphalt and mentalities move slow. We're not years away from a self-driving transportation system, we're decades.

And this trial is just the beginning of that long story.


Most drivers avoid the most serious penalties, and more importantly most victims are denied restitution, not because of sympathetic juries but because of insolvency.

The chance someone will cause a grave accident and the chance someone is insolvent are not independent variables.

You get some real horror stories in law school about people trapped in burning vehicles with no remedy for the surviving family.

https://en.m.wikipedia.org/wiki/World-Wide_Volkswagen_Corp._...

One way you can solve for this is strict products liability. Normally we ask if there's a defect in design or negligence. That produces costly litigation where a lay panel reviews a bunch of technical engineering documents with no special training, with highly varying outcomes. Instead, just have the manufacturer pay some statutory compensation whenever their product causes harm. The social harm of the thing will be borne by its purchasers.

It's hard to do that with car accidents though, because driver error contributes so much to outcomes, it's often weird to punish the company.

But we could vastly simplify the auto insurance system and improve the safety of these vehicles by forcing them to just warranty against harm once drivers are out as a factor. (Insofar as a warranty is a promise not to do harm with a fixed penalty up front based on breach.)

It would be fitting if such a statutory regime were the earliest primary law about AI, because that would officially make a requirement that AI does not harm humans into literally the first law of robotics.


>Here we go. We'll now have the first traffic fatality trial where it's not drivers-trying-drivers but people-trying-a-megacorp.

On even the slimmest cause, plaintiff's counsel always tries to include the manufacturer in the case. Anyone with deep pockets that they can pull in.


Moreover, "Wrongful death settlements are often paid out by insurance providers who provide liability coverage for the person or entity for whom the death is being blamed. Insurance policies typically have a policy limit amount, above which the insurance company will not pay and the person is individually liable" from https://www.google.com/url?sa=t&source=web&rct=j&url=https:/...

Although, as far as I know, were the jury to become aware of this, a mistrial should be declared.


I agree. People wish not to avoid death, but to lessen their fear of death.

Arguments about how relatively safe self-driving cars will be are beside the point. A self-driving car is something that can kill you. The fact that death by self-driving vehicle is especially rare makes it especially scary.

A low accident rate is not a good thing, psychologically. Only when a risk is comfortably common can we comfortably begin to ignore it.


There's a current trend for people to anthropomorphise cars, with the backing of the industry (two eyes/lights and a mouth/grille on the front of "him/her"). Won't the industry go into overdrive on the anthropomorphism front to try and preserve this judicial empathy?


People can't be cars, so, no.


I'd be interested in hearing more about why you are a bike advocate?


It's a sad reality, but regardless of how many times per day a pedestrian is hit by a human-driven car, such incidents involving self-driving cars will be headline news for years to come.


Having a hard time sympathizing here. It's one thing to fear air travel based on the very few but very publicized plane incidents (considering all the data we have on the safety of air travel). It's another thing to hold these self-driving car companies accountable, considering a) the lack of data and history of such programs, and b) their touted benefit as a safer alternative to human-driven cars.


It's not about sympathy. It's about one question: do they kill fewer or more people than humans per mile?

If they kill fewer people, they can be run by a joint venture of Satan and the Mafia for all I care.


If the miles driven are not enough, they appear to kill less until they first kill someone, at which point they appear to kill several orders of magnitude more. Number of events per something is not always a statistically valid way to measure things.
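
To put numbers on that: a single observed fatality barely constrains the underlying rate at all. A sketch, assuming roughly 3 million autonomous miles (the mileage is an assumption; the interval is a standard exact Poisson confidence interval):

    from scipy.stats import chi2

    av_miles = 3e6        # assumed mileage
    deaths_observed = 1

    # Exact 95% Poisson confidence interval for an observed count of 1.
    lower = 0.5 * chi2.ppf(0.025, 2 * deaths_observed)        # ~0.025 deaths
    upper = 0.5 * chi2.ppf(0.975, 2 * (deaths_observed + 1))  # ~5.57 deaths

    def per_100m_miles(count):
        return count / av_miles * 1e8

    print(per_100m_miles(lower), per_100m_miles(upper))
    # ~0.8 to ~186 fatalities per 100M miles: the interval spans from below
    # the ~1.2 human baseline to far above it.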


Well, the miles are enough...


Only if you aggregate them across companies and ignore the fact that the distribution of those miles is not comparable to the distribution of human-driven miles.


Do you really think there's too little data by now, or are you just making a technical point that isn't actually relevant to this specific situation?


As far as I know Uber has done pretty few miles, compared for example to Alphabet, so it is relevant.


What if self driving cars kill less people, but the type of people they kill are different from the type of people who die in human driving accidents?

For example, what if instead of 100 people per day dying from human-driven car accidents(where 95 of them are car drivers/passengers, and the other 5 are pedestrians/bicyclists), self-driving cars only kill 30 people per day, but 28 out of 30 are pedestrians/bicyclists?


From the comments above, this question is answered. Uber is around 25x worse than humans.


The entire potential of self-driving cars shouldn't be dismissed based on accidents like this - that would be an unfortunate side effect of these headlines. But I also reckon this coverage actually incentivizes better and safer programs. It's such early days, we simply need more data to form sound opinions. In the meantime, journalism is serving an important role here.


So far: more, by a factor of 20-25.


If human behavior on the street changes because of the fear of autonomous cars, life on the streets and the streets themselves will be much shittier.

I can see banning autonomous vehicles in city centers.


Wow. This is dark but it feels real to me.

I had been assuming that self driving cars would be a 'reset' for our decision to value cars more than the countless lives they end. Self driving cars could take the irrationality and emotion out of driving. It would be America's "stop the murder of children" moment (the Netherlands' rallying cry to reform auto traffic in the 1970s).

But now I see that it can, and probably will go the other way. These giant companies could push to further restrict the rights of cyclists/pedestrians around roadways, vilify the 'stupid' people who made a reckless decision that resulted in an accident, and push our country even further into car culture.


As they should: humans fail independently, but autonomous cars can fail systemically, including hacking. Bicycles are known to be tricky for autonomous cars.


Even if its provable that self driving cars are 10x safer, every accident of this unfortunate kind will be used against autonomous cars. I don't see how to overcome this -- I don't expect reason will overcome the perceived danger.


You're goddamn blind. Yes, of course it matters more if I'm killed by an AI machine than it does if I'm killed by another human being. This is a new development, humans killing humans are not. What if it was autonomous warfare? It is surely coming about in a very near shipment, SOON. The world and history are not static, and techno-utopia is big fat lie, and the idea that things just get better as time advances frontwards is plain infantile. Reality is way more fucked up than you can imagine.


> You're goddamn blind.

We ban accounts that do this. Please read https://news.ycombinator.com/newsguidelines.html and follow the rules when posting if you want to keep commenting here.


There may be no company I trust less to be open and honest about this than Uber. The arms race between companies will likely keep the most important lessons from this event proprietary. I'm very much not looking forward to hearing execs answer for this as I'm sure it will have the same amount of humanity as the rest of their comms.


This is my thought as well. Compared with Waymo, Cruise/GM, Tesla, Lyft, and the many other start-ups in this space, I think Uber is the least ethically scrupulous. I also doubt their self driving tech is as advanced as many of the other players, and I wonder if those other companies would have been able to handle this situation.


It's fascinating that so many brilliant people in this thread can talk so casually about any single loss of life for the tech built by the very same people in this thread. I would be devastated if I learned my code ended even one person's life.

You're in a bubble if you think a world hyper connected to social media that's in the same world scared that autonomy is about to kill millions of jobs in one jab and would NOT over-react to a single accident.

From the article it's impossible to know if the cyclist was at fault, so I won't jump to conclusions, but we can't, especially as the creators of the future of autonomous vehicles talk so dissociatively about human life. Either we go out into the public roads with very high certainty that OUR software and OUR hardware won't kill someone, or we wait till we get there.

Public opinion on human driving is doing just fine with human drivers right now, so we can and should take our time to get to the future we want or risk a backlash and a guilty conscience.


> Public opinion on human driving is doing just fine with human drivers right now

There were over 40,000 motor vehicle deaths in 2016 in the US [1] - more than a hundred daily. While loss of human life is always a tragedy, individual fatalities are absolutely worth it if we can begin to reduce that count sooner than by proceeding with excessive caution.

It would be morally unacceptable to delay development merely to avoid a guilty conscience.

[1] http://www.nsc.org/NewsDocuments/2017/Fatality-estimates-Jun...


And it’d be morally unacceptable to allow public deployment of software that has not been sufficiently tested. As other comments have pointed out, self-driving cars have undergone ridiculously little testing. In fact, based on only the objective statistics it is very unlikely that they are anywhere near as good at driving as humans.

There is no reason self-driving cars can’t be tested in private. Companies can hire pedestrians to interact with the cars, and the software can go through the same certification process that buildings and vehicles currently go through.

It’s a false dichotomy to say that you can either have self driving cars or minimally safe and accountable development, but not both.

EDIT: Here is a link to a thread replying to a parent which is now auto-collapsed. https://news.ycombinator.com/item?id=16620968


I'm just going to point out that nearly every time someone gets their learner's permit, or graduates from a learner's permit to a full license, an insufficiently tested driver is allowed on the road in order to develop more skills and become a better driver. Often they kill people in the process of learning. Should we require years of private training for every human driver as well?

They can't even share what they learn with each other effectively.


“Often they kill people”??? That’s not even intellectually honest. This is the flippant attitude the top poster is responding to.


Our city (which is pretty small tbh, 150k) had 424 DUI arrests just this past weekend (Fri-Sun) due to St. Patrick's Day. This is despite availability of Uber, Lyft, taxis, public transit, and bar services which will give you a ride home and allow you to park your car until morning.

Ignoring these facts is just as intellectually dishonest.


These cases are not what the parent comment was referring to.

The PP was insinuating that people with a learner's permit (and, by law, an experienced driver in the passenger seat) or people who just got a license (and hence were, literally, tested) are "insufficiently tested".

The PP was responding to the claim that letting "insufficiently tested" systems on the road with the goal of letting them improve is irresponsible.

For the response to have any merit, you need to cite accident statistics for people with learner's permits, or new drivers.


To be clear, the testing for getting a learner's permit (where I live at least) is 7 out of 10 questions on a multiple choice test, and the test for a driver's license is a 20 minute drive-about where the driver gets to more or less choose the area they'll drive and the weather when the test is done.

I don't think it's even a very tough argument that these are at best basically limited filters on actual driver skill. I've known people who literally went to another city for favourable conditions for their driver test. I've known of people who passed having driven not much more than a few hours in their lives.

Nowadays if you want to be the driver in the passenger seat for a learner you need to do a somewhat more difficult test and be older.

Also, I'm really not talking about this specific case but I think it's particularly relevant that in this case there was a qualified driver able to take over for the autonomous vehicle, which is actually more supervision than a 14 year old with a learner's permit has.


Let’s look at the actual numbers. Here is a link to one such comment: https://news.ycombinator.com/item?id=16620968


Are you really scaling off a single data point? I feel like you'd also have to compare the types of driving. What is the death per 100 million miles on a city street? If you exclude highway miles I'd imagine it's much worse.


It is the only data we have. Ironically, one reason for the paucity of such data is that these companies have been so reluctant to make public their records—the reason this accident occurred in Arizona and not California is because Arizona has relaxed reporting requirements. And the commenter notes that extrapolating might not be wise.

Are you really asserting your statements without any fact whatsoever?


I did not assert anything other than that extrapolating a single data point doesn't really tell you anything and is pretty irrelevant.


I'm curious what your dispute is here. Unless you're reading something into it that I didn't say. I certainly didn't say most new drivers kill someone. But to say that this is not a thing that happens with a decent level of frequency is just silly. There is an actual reason insurance is higher for teenagers.

The post I'm responding to, from my perspective, has a flippant attitude towards the deaths caused by human drivers.


A learning driver on their first day has at least 16 years of experience with traffic (and life in general), and thus a fine model of which actions can map to injury and loss of life.


I'm sure there are some people who are attentive enough about traffic before they start studying for a learner's permit (at 14 most places, I believe, not 16) to be described that way, but many are not. Especially if you live and go to school in a suburban area your experience with traffic is probably largely a) crossing not very busy streets, b) riding a bike on, again, not very busy streets where traffic laws are honestly barely obeyed anyways, and c) getting on a bus or being driven everywhere, at which point maybe you pay attention or maybe you don't.

There's definitely going to be some osmosis but I think you're pretty vastly overstating it (and including several years in which your ability to even comprehend what traffic is is going to be severely limited).


That's not even remotely true. When I started learning to drive I had 0 experience with traffic. When I was fully licensed I had less than 2 years experience.


So you had literally never seen a car or crossed a road before?


The difference is we have a lot of experience with the human brain, and not much at all with self-driving cars. All we have to go on are the statistics, which say that although humans are bad drivers, self-driving cars are even worse (or at best that we don’t have sufficient information).

Again, why not just test them in private and hire people to be pedestrians?


> why not just test them in private and hire people to be pedestrians?

And get these employees to do what exactly? You obviously can't tell them to put their lives at risk to test what happens if they walk onto a car's path, or if they ride a bike on a car's blind spot wearing dark clothes at night, or if they fall from a motorcycle in front of a car after hitting a raccoon.

The dichotomy here is that private courses are inherently orderly and the real world is inherently chaotic.


That is exactly what those employees would have to be paid and consent to do—just like test drivers. In fact, someone else linked a Waymo blogpost saying that they have employees do exactly those things (walking into cars’ paths, lying down on skateboards...)

How is it not OK to have consenting people do these things but OK to have random people on the street participate in exactly the same tests?


People willingly operate unsafe machinery in 3rd world country factories but personally I don't think it's OK when they get crippled due to some accident or malfunction.

If all you're testing are scenarios that are known to be safe for the employee in question, then what exactly are you gaining from that testing?


Again:

> How is it not OK to have consenting people do these things but OK to have random people on the street participate in exactly the same tests?

Are you seriously saying Waymo’s (for example) testing practices are worse than just testing unsafe software in public?


It's not a matter of whether they are better or worse, it's a matter of whether these tests are sufficiently realistic.

Anyone with half a brain would hopefully quickly realize that you don't actually need a live person to lay down on a skateboard. Heck, you can simulate a much more risky jaywalking scenario with a mannequin on a dolly than with a living person.

Waymo is deploying its fleets on public roads too, which suggests to me that they think that private course tests can only get you so far.


> Waymo is deploying its fleets on public roads too, which suggests to me that they think that private course tests can only get you so far.

No, it just suggests that building and operating huge private courses that realistically emulate daily traffic situations in cities is much more expensive than just (ab)using the "real" public infrastructure paid for by tax dollars for your beta-testing needs, and that Waymo (just like Uber) takes full advantage of this chance to privatize gains and socialize losses.


Can't speak for Uber, but Waymo/Google apparently has billions of miles tested on closed track, vs just 4+ million miles on the open road:

https://medium.com/waymo/waymos-fleet-reaches-4-million-self...


Every day a self-driving car is not tested in a real-life environment is a day self-driving technology is delayed, and a day of delay costs hundreds of human lives.

If anything, we are morally obliged to double down.


Yes, we're obliged to double down. On safety. The Uber self driving cars are known to be unsafe around pedestrians. Or another way of saying it: "people walking". Or just "people". Like you and like me. I had one try to run me down in the crosswalk (utterly failed to yield to pedestrians in the crosswalk) while Uber was running their ill-fated test in San Francisco. I walked into their offices on Harrison and asked to file a bug report. They laughed it off. Saw the exact thing happen a couple days later. They need to stop till they can figure out how to do it safely.


Absolutely not. Current data points at self-driving cars being more dangerous than human-driven cars. Double down on real-environment testing and you'll expose the public to increased danger until the software becomes good enough. You're basically telling random people to publicly share the risks so other people in the future may or may not be safer, while private companies profit from the fruit of the research, and, sincerely, screw that: public tests should start after controlled testing proves the system to be at least no more dangerous than the status quo, and the companies should use testers who opt in and are adequately compensated for their liability to injuries, instead of killing off random bystanders for the "greater good", which in truth is how they name their payroll savings.


When considering whether they're net saving lives we are now doing so from the starting point that self driving cars presently have a worse fatality record per real world mile driven than humans, including the portion of miles driven by humans who are unfit to drive or wilfully reckless.

Self driving cars can also be tested in real life environments with safety drivers behind the wheel whose own driving shortcomings are unlikely to coincide with software error. This is believed from the OEM's own telemetry to have prevented several accidents with early generation Waymo (Edit: and apparently one was present in this vehicle and was unable to prevent the accident)

Of course, there's a PR benefit of taking the drivers out of the car at this stage of their development, and it appears this has taken priority...


Sure you could test it privately. Probably quite extensively. But if you have any inkling of how deep learning works you might agree that this would not be sufficient in any way. The only way that will really work is on real world streets.


And if you had any inkling of how robotics (or indeed engineering!) works I’m sure you would know the value of testing dangerous and extremely immature equipment with people who consented to be your test subjects and in controlled environments. “Deep learning” is no excuse.

Certainly in the long run we would want to test in the real world. The argument is whether or not we are there yet. Other commenters have made a compelling argument, again based on the statistics, that we are not there yet. Do you disagree, and if so, based on what facts?


I never made an excuse nor said we should be testing these on live streets.

What I did say is that with deep learning it simply is not possible to "test" in a simulated environment to any level of certainty.

You can use simulated environments with test subjects to help develop such as system (and should do so). But you will never be able to adequately test such a system in this environment.

Why? The state space of traditional robotics and engineering problems is far more constrained. So much so that you can't even compare the fields in my opinion.


But you can still gather learning data from sensor-covered cars "in the wild" and then test it with contrived tests.


> "it’d be morally unacceptable to allow public deployment of software that has not been sufficiently tested"

Does "sufficiently tested" mean you get the end-product a year later? What if the end-product saves 10,000 lives a year? In that case what cost is morally acceptable to get there a year faster?


> What if the end-product saves 10,000 lives a year?

You can't have the certainty to make such statements without "sufficiently testing" the product.


> You can't have the certainty to make such statements

Can you point me to where I made a statement? I only see questions.


You're misinterpreting me. I meant "nobody" can make such statements. As in, if you (somebody) have a product and you (somebody) haven't tested it enough, you (somebody) can't state that it will save N lives per year.


You can't project the actual cost without sufficient testing.


Why are you so sure? Such absolute certainty -- certain enough to dismiss the loss of life for the greater good -- is the cause of so many bad things.

I'd like to believe this technology will save 20000 lives per year. But today is a time for humility and reflection.


> Why are you so sure?

Because we have the driving records of all self-driving vehicles. Based on the evidence, their accident rate per 100,000 miles driven is far better than the average driver's. The law of large numbers tells us this is highly unlikely to be the result of luck.


> Because we have the driving records of all self-driving vehicles.

We do? Because one of Uber's apparent reasons for testing in Arizona rather than California (FWIW, Waymo has since moved from AZ to CA) is that there are no requirements to disclose driving/performance records, as there are in California [0].

The most we know at this point is that Uber has claimed to have driven 2+ million miles [1], which means it has a fatality rate of 50x the rate of human drivers (~1.18 per 100M miles). Yes, it's not fair to extrapolate like that, but this is just to point out that you need to cite evidence for "[self-driving vehicle] accident rate per 100,000 miles driven is far better than the average driver's"

[0] https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/auton...

[1] https://www.forbes.com/sites/bizcarson/2017/12/22/ubers-self...


Don't lump Uber's hodgepodge effort launched a couple years ago in with the serious and careful efforts of groups like Waymo/Google and Cruise. Google has a decade or more of data. Uber's frantically trying to figure it out on the fly. And they're seriously failing to run safe cars.


You are assuming that all accidents are independent events and neglecting black-swan risks.


To clarify, nowhere in my comment did I say we should stop development aimed at making our roads safer. Consumer-accessible cars are already far more aware of their surroundings today and will continue to gain more awareness as time goes on.

If you think rushing to market in some self-inflicted arms race for fully autonomous vehicles is morally acceptable, AND human lives are lost in the process, then I find that fascinating.


https://mobile.twitter.com/BenRossTransit/status/97579157290... says:

> US has 1 traffic fatality per 85 million miles driven. Uber's self-driving cars have driven ~3 million miles & killed one person

> While loss of human life is always a tragedy, individual fatalities are absolutely worth it if we can begin to reduce that count sooner than by proceeding with excessive caution.

Wait, when did I agree to be a utilitarian with you? Did I vote utilitarianism into power?


Why wouldn't you? When you deal on the large scale (running countries) you should be utilitarian.

if excessive caution causes more death and more suffering, why would you be for that?


For the same reason humanity has largely left eugenics behind and why so many countries overthrew colonial yokes and despots to form representative government.

For that matter, how confident are you that you truly understand the secondary and tertiary consequences of your choice enough to make a utilitarian choice?


It's morally acceptable to violently end the lives of bystanders if its in the name of progress. Got it.


The number you pull out is not telling. There are billions of miles driven in the US alone and people get killed on average after 100 million miles driven. That is where the bar is, and I can tell you, that is a high bar. Even including all the idiots, teenagers, DUIs etc., it is still one fatality per 100 million miles. So sure, it is a tragedy that those 40,000 people die, but that is not a simple problem to fix. In fact obesity and sugar-caused disease is probably a much simpler problem to solve, one that could likely save way more people.


Is it worth it if we exchange 3 drunk drivers' lives for 1 bystander's? Can we really just use a single measure here?


Currently, Uber self-driving vehicles are clocking in at a fatality rate of about 50x the rate for human drivers.

You can bring your hypotheticals to bear all you like, but this is a serious problem.


It's fascinating that so many brilliant people in this thread can talk so casually about any single loss of life for the tech built by the very same people in this thread. I would be devastated if I learned my code ended even one person's life.

Generally speaking, it is easier to pinpoint X thing led to Y tragedy than to quantify the tragedies that would have happened without X. It is incredibly hard to show what did not happen but "should" have. Even if you can get it straight in your own mind, good luck proving it to the world.

People are living longer and child mortality is down. We don't really celebrate that. We just complain about overpopulation and how we are destroying the environment.

When Iraq lit oil wells on fire on its way out of Kuwait, they were expected to burn for months and be a global environmental disaster. When crack teams from around the world converged and put them out far faster than expected, it was not celebrated with the same degree of fervor that it had originally been decried as a disaster.

Y2K was also predicted to be a global catastrophe. It was quietly prevented and gets remembered as "Those fools who ever thought this was a big deal!"

If self driving cars could reduce mortality compared to current rates and we were confident of that, waiting until it is perfected before releasing it means accepting ongoing deaths that don't have to happen. It's not uncommon for people to feel that new tech needs to be perfected while missing the fact that it may be a big improvement over the current status quo without being perfected.

I don't really know anything about self driving cars. I have mixed feelings about replying to your comment because I don't actually know how I feel about this particular issue. But the thought process you are putting forth is both common and not very pragmatic.

I guess a pithy rebuttal would quote some saying about the perfect being the enemy of the good. But I think that leaves out a lot of important points.


Thank you for adding this perspective. Honestly I am not sure how to rationalize the dueling thoughts in my head.

On one hand, the "just one isn't so bad" argument is rational. In terms of fatalities per mile, self driving cars already seem to be safer than humans as a class. Therefore it would be wrong of me to say that this tech does not belong on the road when I believe I should be allowed to drive.

On the other hand, you said it best. If I ever wrote code running on a machine that killed somebody I don't know how I'd continue in my career. Hell, I work at Google very far from Waymo but if one of our cars struck and killed someone I'd feel some personal responsibility.

So ... I guess it's complicated.


> I would be devastated if I learned my code ended even one person's life.

I agree, but there is already a lot of software in a car that can kill people in case of an error. In particular, airbags have multiple sensors and a complex heuristic to decide when they have to deploy, which ones, at full or partial strength, ... https://en.wikipedia.org/wiki/Airbag#Operation An airbag that doesn't open can kill someone. An airbag that opens unnecessarily can kill someone. From Wikipedia:

> From 1990 to 2000, the United States National Highway Traffic Safety Administration identified 175 fatalities caused by air bags. Most of these (104) have been children, while the rest were adults. About 3.3 million air bag deployments have occurred during that interval, and the agency estimates more than 6,377 lives saved and countless injuries prevented.

So there are already a lot of programmers who have to be careful because an error can kill someone in the car (or the plane, or a medical machine, ...). I guess it's easier to ignore the people killed by airbags because we are used to them and we imagine they have a simple sensor (something like a light switch), so in case of a failure nobody is guilty.

With a self-driving car, the first problem is that it is new and scary, and the second problem is that when an error creates an accident we are used to blaming the driver. But now we don't have anyone to blame (except perhaps the programmer).


My career is in software, but when I peer out of my cave at those in more traditional engineering backgrounds, I realize they have so many more checkpoints and structures for guaranteeing performance / operability under load. For 99% of us in software, if our software fails, nothing really happens of consequence other than downtime - no one dies. In this case, the software written can have mortal consequences. Maybe there's a need for increased scrutiny on software development - especially as IoT gains more popularity and people are relying more and more on the software that supports or replaces human involvement in critical and/or potentially high risk activities.

Raises the question though: will stakeholders be willing to pay the upfront financial costs to obtain the level of quality and assurance that other engineering disciplines provide? We'll see.


What we can't do as the creators of the future of autonomous vehicles is give in to emotional appeals when the relevant thing is statistics about car accidents. If the media does it, bash the media (and hope for decent publications to support the appeal of relevant people in the field). If politicians do it, bash politicians, though nothing really helps when it comes to politics. But don't be nice to bureaucrats and their oblivious supporters who are actually preventing thousands of lives from being saved every year by slowing down development and deployment.


>Either we go out into the public roads with very high certainty that OUR software and OUR hardware won't kill someone, or we wait till we get there.

As long as it kills fewer people than would otherwise have died due to negligent human drivers I'd say it's already there.


Crossing a 4-lane road (where the speed limit is at least 40 mph) outside a crosswalk? It seems clear that it is the cyclist's fault. The only thing that is questionable: would a human driver have avoided this, or at least hit the cyclist at a lower speed?


As someone said on Twitter, among software developers, "ethics" is seen as a hobby at best.


While pithy and edgy, this attitude is, unfortunately, also unconstructive and uninsightful. For those out there who do wish to put their money where their mouth is, do you have any advice besides “don’t work at Facebook”? (Or Uber…) Because that doesn’t seem to be cutting it.

At least doctors have the Hippocratic oath.


do you have any advice besides “don’t work at Facebook”?

Sure: start by reading Spinoza's TIE.


>The self-driving Volvo SUV was outfitted with at least two video cameras, one facing forward toward the street, the other focused inside the car on the driver, Moir said in an interview.

>From viewing the videos, “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway,” Moir said. ( https://www.sfchronicle.com/business/article/Exclusive-Tempe... )

Not saying the software wasn't partly at fault but it doesn't look that clear cut. It sounds like better sensors could have helped.


Shouldn't '[Coming] from the shadows' be irrelevant when the sensors are LIDAR?


One also needs to consider the relative dynamic range of the camera/video in terms of both sensors and playback.

If the hardware is not capable of capturing sufficient dynamic range to be able to see into the shadows while driving in daytime/nighttime in this location, one must ask why not? That seems obviously negligent.

If the car can see into the shadows (say either with lidar or sufficiently sensitive hardware) then a self driving car either just failed to identify and predict the object and subsequently killed a pedestrian, or it detected them and killed them anyway.

If it killed them anyway, it may be that it was physically impossible for them to avoid the detected object, or forensics (if properly performed) would show that the collision was avoidable.

All of these scenarios seem pretty inconsistent with the hypothesis that the police chief knows what the duck he's talking about, and I question what the hell he's doing releasing such a statement before the evidence is in...


Is it normal for police to give these kind of statements right away? I thought they need to present evidence to the court of law instead of giving verdict themselves.


> Elaine Herzberg, 49, was walking her bicycle outside the crosswalk on a four-lane road in the Phoenix suburb of Tempe about 10 p.m.

> Tempe Police Chief Sylvia Moir said that from viewing videos taken from the vehicle “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway."

This seems like an unfortunate accident, but it's not at all clear that the car was at fault or that it could have done anything to prevent it. There was even a human behind the wheel and they didn't react in time either.


I can't describe how shocking it is watching many HN commenters blame a person killed by a machine and defend the thing instead. There are even those saying that such deaths are necessary and useful for the advancement of technology.


I see the Waymo cars while I walk to lunch (AZ) quite often. One time while walking through the crosswalk a Waymo turning right cut me off as I started into the crosswalk despite my having the walk indicator. This seems to be a very difficult scenario for a self-driving car as there are often people at the corner that are either crossing the other direction or just waiting on the corner. I sent them a note via their feedback link, but never heard back.


This is a non-trivial situation for human drivers, too. When I am on a bicycle or driving a car, I have to wait and look at the pedestrian until I am sure of their intention. That's a very difficult problem in that a lot of the time people are on their phone or watching a child or dog; I often have to slow to a standstill and speak to people to ask them what they are planning to do.


Indeed. Right on red is unsafe for pedestrians. I guess at least the requirement to stop before turning helps keep the speed of impact low enough that the consequences are minimal.

As a driver, there are certain heuristics, e.g. did my traffic light just go red, or is the pedestrian signal/traffic light in the same direction about to go green.


I think the situation was a right turn on green, which I argue is more dangerous for pedestrians.

The answer is to do what denser European cities do, which is to have pedestrian crossing signals have their own turn while all vehicle lights are red.


The last time I visited Phoenix, I recall being shocked at how far apart the traffic lights were in the city. Between the Wal-Marts and the giant four-to-six-bedroom homes, it might be normal to see a major street have a mile between traffic lights (and presumably, crosswalks). It was very different from, say, driving in New York, San Francisco or Seattle.

I'd be curious whether "walking outside of the crosswalk" turns out to be material in the analysis by NHTSA/NTSB. Were the crosswalks big enough and close enough that a pedestrian would have normally walked inside one? Was the interchange a regular 90-degree crossing of two roads, or was it something more irregularly shaped? What is the annual rate at which pedestrians get hit at this intersection by human drivers? Do self-driving cars need more examples of "pedestrians walking outside the crosswalk" in their training sets?


I hope they aren't expecting people to only appear at intersections:

"The California Vehicle Code says you can actually cross any street as long as you aren't a hazard to vehicles. It is also legal to cross mid-block when you're not between two intersections with signals." [1]

The article points out that LA has a different rule. Uber's going to need to geofence locations and use different rulesets for different cities.

[1] https://www.scpr.org/news/2015/04/14/50992/how-do-you-cross-...


Seems like the law is more vehicle-friendly in AZ:

https://www.azcentral.com/story/news/local/glendale/2014/09/...

> According to Arizona law, pedestrians are supposed to cross within marked crosswalks, or at unmarked crosswalks at intersections. An unmarked crosswalk is the location where two roadways intersect but no marked crosswalk is present, for instance in a residential neighborhood. Jaywalking is technically crossing "between adjacent intersections at which traffic-control signals are in operation." In this instance, pedestrians shall not cross at any place except in a marked crosswalk.

But why would state-by-state (or city) geofencing be needed for this situation? Seems like pedestrian safety would be prioritized regardless of whether the pedestrian seems to be committing jaywalking. Is there a situation in which the AI shouldn't attempt to brake if a large object appears in the street in front of it? And how big of a factor would local laws play?


"Uber's going to need to geofence locations and use different rulesets for different cities."

Maybe not. Regardless of the street-crossing law in a specific city, it is a bad idea to kill pedestrians.


Sure, but it's illegal in most countries (AFAIK also in the US) not to use a crosswalk when you are in the vicinity of one. E.g. in Austria you will be fined (or worse, have your driver's license suspended) if you don't use a crosswalk that is within 50m of you.


So you should be killed by an autonomous vehicle because you jaywalked? I hope you realize you are comparing being fined for a petty offense with getting killed.


Former Lyft eng here. From my vantage point, as an industry we're nowhere near where we should be on the safety side. The tech companies are developing driving tech privately instead of openly. Why can a private for profit company "test" their systems on the public roads? The public is at serious risk of getting run over by hacked and buggy guidance / decision system. Even when a human operator has his hands hovering an inch off the steering wheel and his foot on the brake, if the car decides to gas it and swerve into a person, it is probably too late for the human safety driver to take over. This is going to keep happening. The counterargument FOR this is that it is overall a good idea for the transportation system if the number of crashes and deaths is statistically lower than with human-operated cars. I see this as the collision of what's possible with what's feasible, and we are years away from any of this being close to a good idea. :( Very sad for the family and friends.


Perhaps companies need to test their safety devices first. I.e., first prove that their LiDAR correctly identifies pedestrians, cyclists, etc. From there, build test-vehicles with redundancy, e.g. with multiple LiDAR devices. Then prove that the vehicles actually stop in case of emergency. And only then actually hit the road.

Of course, the US department of transportation should have set up proper certification for all of this. They could have easily done so because they can arbitrarily choose the certification costs.


What you're describing is a driving test that every human needs to go through before they're allowed to drive on public roads. Something that can be revoked temporarily or permanently.

I would be very interested in a 3rd party (government or private) creating a rigorous test (obstacles, weather conditions, etc...) for self-driving vehicles. Becoming "XXX Safety Certified with a Y Score" for all major updates to the AI could help restore confidence in the system and eliminate bad actors.


How about if we start with a test no human driver is given:

Identify the objects in pictures.

We take our biological vision systems for granted, but it seems one autopilot system couldn't identify a semi crossing in front of the vehicle...


In some countries you have to pass a medical examination which includes a vision test.

The driving schools also have theoretical tests where one has to identify objects in a picture, interpret the situation, and propose the correct action. Of course, these tests are on a higher level: "this is a car, this is a pedestrian" vs. "you're approaching an intersection, that car is turning left, a pedestrian is about to cross the street, another car is coming from there etc."

Not to mention the road and track tests a driver has to pass which include practicing controlling the car in difficult conditions: driving in the dark, evasive actions on slippery surfaces and so on.

Edit: In my opinion it's insane to allow autonomous vehicles on the roads without proper testing by a neutral third party.


>the US department of transportation should have set up proper certification for all of this

I think you're severely underestimating the path that something like this would have to take. The certification itself would be under so much scrutiny and oversight that it would take years for that to get done. Unfortunately, the technology is far more readily available and easy to get working than the political capital required to create a certification for this.


if we wait for the gov to set up a certification for this, we'll delay the whole industry 10 years.


And?


It would cost thousands if not millions of lives. You understand that over a million people die every year due to driving? The system is not working.


A "million" people do not die in the U.S. every year from driving. Not even close:

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...

Not that 37,000+ is a great number, but I don't think many of the detractors here are arguing that Uber et al. have a perfect record. Just that it's possible that progress is being made in a more reckless way than necessary. Just because space flight is inherently difficult and risky and ambitious doesn't mean we don't investigate the possibly preventable factors behind the Challenger disaster.

edit: You seem to be referencing the worldwide estimate. Fair, but we're not even close to having self-driven cars in the most afflicted countries. Never mind AI, we're not even close to having clean potable water worldwide, and diarrhea-related deaths outnumber road-accident deaths according to the WHO: http://www.who.int/mediacentre/factsheets/fs310/en/


Yeah, but the tech will spread there fairly soon after it's established in the US. In places like Africa the most common cars are not some African brand; they seem to mostly be Toyotas, and Toyota will probably implement self-driving once it's proven.


For what value of "soon after" is very expensive automation going to reach Africa, India, and other places in numbers sufficient to put a dent in those fatalities? The slow march of other tech, safety included, suggests decades. Meanwhile the safety gains of automation are so far hypothetical, and until they're well demonstrated, potentially a distant pipe dream. Nothing about ML/AI today suggests a near future of ultra-safe cars.


Wow, let's just put people in bubble suits so they don't hurt themselves. It's ridiculous to say people shouldn't drive cars because it's possible to hurt themselves or others. We might as well outlaw pregnancy for all the harm that can come to people as a result of being born.


> if we wait for the gov to set up a certification for this, we'll delay the whole industry 10 years.

That's not a particularly convincing argument, given that (so far), Uber's self-driving cars have a fatality rate of 50 times the baseline, per mile driven[0].

Having to wait an extra ten years to make sure that everything is done properly doesn't sound like the worst price to pay.

[0] Nationwide, we have 1.25 deaths per 100 million miles driven. Uber's only driven about 2 million miles so far: https://www.forbes.com/sites/bizcarson/2017/12/22/ubers-self...


In those 10 years ~350,000 people will die in car accidents in the US alone.

Let's say that halving the death rate is what we can reasonably expect from the first generation of self-driving cars. Every year we delay that is 15,000 people dead. This woman dying is a personal tragedy for her and those who knew her. However, as a society we should be willing to accept thousands of deaths like hers if it gets us closer to safer self-driving cars.


> Let's say that halving the death rate is what we can reasonably expect from the first generation of self driving cars.

What's your evidence for why this is a reasonable expectation? The fatalities compared to the amount of miles driven by autonomous vehicles so far shows that this is not possible at the moment. What evidence is there that this will radically improve soon?


Why should we accept those deaths? This is like saying we should let doctors try out surprise untested and possibly fatal therapies on patients during routine check ups if their research might lead to a cure for cancer.


This is a silly interpretation of the data. You can tell because up to now, Uber could've been characterized as having an infinitely better fatality rate than the baseline. Which also would've been a silly thing to say. If a single data point takes you from infinitely better to 50x worse, the correct interpretation is: Not enough data.


> You can tell because up to now, Uber could've been characterized as having an infinitely better fatality rate than the baseline. If a single data point takes you from infinitely better to 50x worse, the correct interpretation is: Not enough data.

No, you couldn't have characterized Uber as having an "infinitely better" fatality rate than the baseline, because that would have required a division by zero to calculate the standard error. Assuming a frequentist interpretation of probability, of course; the Bayesian form is more complicated but arrives at the same end result.

It's true that the variance is higher when the sample size is lower, but that doesn't change the underlying fact that Uber's fatality rate per mile driven is empirically staggeringly higher than the status quo. Assigning zero weight to our priors, that's the story the data tells.


You're talking statistics. I'm talking common sense. Your interpretation of the data is true, but it isn't honest. As a response to scdc I find it silly.


Nope. Error bars do exist, and with those attached, the interpretation of the data before/after is consistent. Before it was an upper bound, after it is a range. Every driven mile makes the error on it smaller.
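
To put rough numbers on those error bars, here is a hedged sketch using an exact Poisson interval; the ~3 million miles and ~1.2-per-100M baseline are the approximate figures quoted in this thread, not official data:

  # Exact (Garwood) Poisson confidence interval for a fatality rate.
  # Inputs are the thread's rough figures, not authoritative statistics.
  from scipy.stats import chi2

  def poisson_rate_ci(events, exposure_miles, conf=0.95):
      """Return (lower, upper) rate per 100M miles for `events` in `exposure_miles`."""
      alpha = 1 - conf
      lower = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
      upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
      scale = 1e8 / exposure_miles          # convert to "per 100 million miles"
      return lower * scale, upper * scale

  low, high = poisson_rate_ci(events=1, exposure_miles=3e6)
  print(f"95% CI: {low:.2f} to {high:.0f} fatalities per 100M miles")
  # Roughly 0.8 to 186 per 100M miles -- a huge range that still contains the
  # ~1.2 human baseline, so "not enough data" and "50x worse" can both be read
  # off the same single data point.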


Under 50 times. Still horrible, of course.

https://www.androidheadlines.com/2017/12/ubers-autonomous-ve...

Presumably a bit more since December.

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...

Fluctuates just over 1 per 100 million.

(1 fatality / 2+ million miles) / (1+ fatalities / 100 million miles) ≈ just under 50


You've hit upon one of the most obvious ways to improve the safety of these systems. Deploy the systems more broadly, without giving them active control.

Then, you can start to identify situations where the driver's actions were outside of a predicted acceptable range, and investigate what happened.

Additionally, if you have a large pool of equipped vehicles you can identify every crash (or even more minor events, like hitting potholes or road debris) and see what the self-driving system would have done.
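
For illustration only, here is a minimal sketch of what that passive "shadow mode" comparison might look like — all field names and thresholds are hypothetical, not anyone's actual telemetry format: log what the planner would have done alongside what the human actually did, and flag large divergences for offline review.

  from dataclasses import dataclass

  @dataclass
  class Frame:
      human_steering: float    # degrees, what the driver actually did
      human_brake: float       # 0..1 pedal position
      planned_steering: float  # degrees, what the passive planner wanted
      planned_brake: float     # 0..1

  def divergences(frames, steer_tol=10.0, brake_tol=0.3):
      """Yield indices of frames where the planner and the human disagree sharply."""
      for i, f in enumerate(frames):
          if (abs(f.human_steering - f.planned_steering) > steer_tol
                  or abs(f.human_brake - f.planned_brake) > brake_tol):
              yield i

  # Tiny example: only the second frame is flagged for review.
  log = [Frame(0.0, 0.0, 0.5, 0.0), Frame(1.0, 0.0, 25.0, 0.9)]
  print(list(divergences(log)))   # -> [1]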

The realistic problem is that Uber doesn't give a shit. As such, deployment will never be optimized for public safety. It will be optimized for Uber's speed to market.


Even worse: There are people who test their DIY self-driving hacks on public roads:

https://youtu.be/GzrHNI6eCHo?t=100

Using https://github.com/commaai/openpilot , which is cool but not on public roads.


That is indeed reckless, but that guy is testing open source self-driving technology.

Given that all the main car companies are keeping their technology private I don't see how open-source systems are supposed to keep up without people doing this.

I think you can also contrast this to the many more people who decide to drink alcohol and then drive.


Unless things have changed, all the important parts aren't open source -- neither the vision nor the decision pipeline.


https://github.com/commaai/openpilot this is the bit that needs training as far as I'm aware.


> I think you can also contrast this to the many more people who decide to drink alcohol and then drive.

Sure, but drinking (more than a little) and driving is already illegal, so they're not exactly comparable.


In Germany you would get punished for doing something like that (well, somebody would need to catch you first, of course).


> Why can a private for profit company "test" their systems on the public roads?

I mean, this is the way cars have always operated. Is it somehow any different to let them "test" their brake systems, their acceleration systems? Car companies have always been able to do whatever they want; who cares about the deaths.

For some reason, when it comes to roads, we all just accept a certain number of deaths each year. Over a million people are killed every year around the world, up to 40,000 in the US alone. That is acceptable according to every car driver.


Car companies have (multiple) private tracks that they stress test their vehicles on. Car companies are extremely risk averse. Public road testing for something safety related is going to be very late stage in development/testing, if at all.


Some level of integration testing always has to meet the real world. Maybe in IT we would call it a "canary" rather than a test. I once had a colleague who had been a test driver for a major European car firm. His job was to drive cars around in real traffic.

Self-driving cars need this all the more, since most of the hard and scary stuff is the integration of many systems and how they deal with unexpected scenarios. And worse still -- in a traditional car, you had an expert test-driver like my friend who could, say, take evasive action if the brakes didn't work.

But in these cars, the driver is the system under test.


> Car companies have (multiple) private tracks that they stress test their vehicles on. Car companies are extremely risk averse. Public road testing for something safety related is going to be very late stage in development/testing, if at all.

Yes, and so do these guys. They might be somewhat risk averse, but let's not forget they have in the past made trade-offs that cost lives (the Pinto, etc.), and there are constant recalls going on.


Google, at least from what I've read, has all of that.


> That is acceptable according to every car driver.

All of them? I mean, you took a poll? That must have taken a while.

My understanding (as a non-US resident) is that parts of the US are built around the assumption you drive.


> parts of the US are built around the assumption you drive

Yes, this is 100% true. Arizona included.

It's difficult, usually impossible, to buy your way out of driving (or being driven) in most places in the USA. Ask DART why rapid transit ridership in the Dallas area is falling despite a growing population and a billion-dollar annual budget.


> All of them, I mean you took a poll? that must have taken a while.

If someone wants to pretend it is not acceptable while still driving, that is just silly.


The FLOSS side has argued for openness in critical systems for ages. Medical devices such as pacemakers, airplane systems, automatic brakes on cars, voting machines, and the list just goes on. If the device gets hacked or has any bugs then people will likely die (except for the voting machine), but the code is kept private even for devices which literally get implanted into your chest and which a person has to trust with their own life. Governments all over the world seem to be very much against the idea of requiring that such critical code be made open for the sake of safety, as that would leak the oh-so-important trade secrets.


When regular cars first appeared, were they not tested "live" immediately? And yes, there was FUD even back then, and in parts of the world a backlash which resulted in sometimes silly safety rules.

The largest real difference seems to be the speed of communication nowadays, the rate at which public opinion can be manufactured and globally spread.


When regular cars first appeared, vehicles had to be led by a pedestrian waving a red flag or carrying a lantern to warn bystanders of the vehicle's approach.

https://en.wikipedia.org/wiki/Red_flag_traffic_laws


And a countless number of people have been killed.


Real cars were an evolution of an existing technology (the carriage). We replaced a horse with an engine and a steering wheel.

In this case we have a machine replacing a human and using non-deterministic algorithms to do so.

Automated driving ONLY makes sense on specific, well-mapped, sensor-enabled, pedestrian-free roads. Let's focus on that use case before we let these things take over our towns and cities.


Such roads exist today: they consist of two steel rails and are generally called railroads.


I'm for a middle-of-the-road way of doing things at first: make some highway lanes for autonomous cars only. Separate those lanes physically. Remove the speed limit there to encourage people to get cars which can use those lanes.

Implement a way for the highway operator to affect the cars on it: make way for emergency vehicles, adapt to information the operator can aggregate and transmit, change the speed of cars to the slowest one around, etc.

Once the majority of people have self-driving cars and those have been proven in this controlled environment, you can think about allowing them on open roads. Or not. Or adapt those open roads to what self-driving cars need.


> make some highway lanes for autonomous cars only. Separate those lane physically

sure, makes sense to me.

But God, that would cost a fortune.


> But God, that would cost a fortune.

It would. Like an infrastructural job. I mean, it would be like railways for individual trains.

But that's kinda-sorta the job expected from government and what taxes are for: paying for things the individual would have problems doing.


What do you mean? Everyone making these comments is fairly certain that this is a completely trivial endeavor.


GM's "super cruise" fills the exact role you're describing aside from removing the speed limit, and aside from the billions of dollars in public infrastructure spending and increased traffic for everyone else.


> Why can a private for profit company "test" their systems on the public roads?

Because at some point, it's literally not possible to test something that's ultimately intended for use on public roads anywhere but public roads. Or, to put it in terms that would be more familiar to developers, "you can test all you want in QA, but prod is the ultimate test of whether your code really works".

As mentioned downthread, the problem here is that Arizona basically gave companies full leeway, without having to demonstrate that they had done sufficient tests internally before putting the cars on public roads. Apparently they're not even required to report data on accidents to the state, which is absurd.

I wouldn't be surprised if Uber were ultimately found to be cutting corners, but Arizona is also responsible for rushing to allow self-driving cars on their roads without doing their own due diligence first.


This is what it means when politicians say they’re cutting red tape. They’re letting industry do whatever the fuck they want at the expense of everyone.


When it is deemed necessary to test these vehicles on public roads, after they have been shown to be mostly safe, ALL of them should have flashing lights like any emergency vehicle.

I'm stunned that when I drive in Texas, there don't seem to be regulations over which vehicles can have flashing lights and with which colors. Yet in these other cities you have experimental robots that are designed to blend in with other cars as much as possible. NO! Make them stick out.


Which is hilarious to me because AZ is the bastion of those "Don't tread on me" stickers and all this anti-regulation sentiment... until someone gets killed by a lack of regulations, and then everyone wants to lock everything down and blame the government for not doing their due diligence.


>Why can a private for profit company "test" their systems on the public roads? The public is at serious risk of getting run over by hacked and buggy guidance / decision system.

Because the for-profit company has a say in government and lawmaking; the person getting run over doesn't.


>Why can a private for profit company "test" their systems on the public roads?

Some phase of testing will necessarily be of public roads. The fact that they started that phase prematurely doesn't somehow mean that "private for profit companies" should never test on public roads.

Boeing tests new airplanes in the public airspace too, over public areas.


Boeing certainly tests in public airspace, but only as part of a standardized certification process overseen by the US government - a process and set of regulations that are written in blood.

Uber abides by no such testing standardization.


Which would be a different standard than the one I was criticizing.


I strongly agree with this sentiment. If this technology is going to be tested on the public then all the datasets from these vehicles should be openly shared and development should be done cooperatively. Let the market decide winners based on metrics like cabin comfort and leasing terms rather than odds of causing grievous injury.


> Why can a private for profit company "test" their systems on the public roads?

My thoughts exactly. It is wrong. We have a chance to do this properly, openly and together, but instead our governments are letting private companies put our lives in danger.


You are right on the money. Also, why must self driving cars be on the roads with human drivers at all? We're an unpredictable bunch, and lives are at stake! We should simplify the problem domain and keep self driving cars away from other human drivers and pedestrians (even once the technology reaches a very advanced stage). When they are actually ready to go mainstream, then think about putting them on roads with humans (and even then, dedicated routes and infrastructure specifically for self driving cars may be the safer way to go).


To be fair though, it's not exactly like we're where we need to be at on the safety side from a human standpoint. I'd take my chances with a robo-car over a 16 year old texting any day.


That's why many unnamed companies in the self-driving space are testing solely on enormous closed-off tracks in the South Bay.


It is simple: tech companies are allowed to test their systems on public roads because the communities/states have given them permission to do so (with restrictions like which roads are allowable, max/min speeds, a safety driver present behind the wheel, etc.).


I can't imagine you would want any major new transportation technologies released without being tested on public roads?


There shouldn't be any testing on public roads until the executives and engineers are willing to put on blindfolds and spend six hours running back and forth across a track on which a hundred of their autonomous cars are maintaining 60 MPH in both directions.


> Why can a private for profit company "test" their systems on the public roads? The public is at serious risk of getting run over by hacked and buggy guidance / decision system.

Interesting way of putting it. Kinda echoes the "privatize the profits, socialize the losses" sentiment from the financial crisis.


We already have a system for privatizing negative externalities in traffic accidents (and in 95% of the rest of life). It's called tort law, and it will certainly be used in this case if Uber was at fault.

There are of course situations where externalities fail to be privatized (e.g., traffic congestion), but I don't think auto accidents are one of them.


Someone died. There is no way to truly privatize that externality. Money won't bring the woman back.


Do you drive? How can you drive and not be aware of the 40,000 deaths a year in the US, and pretend you aren't part of that system.


Do you program? How can you program and not be aware of the X deaths, Y privacy violations and pretend you aren't part of that system?

Is every programmer in the world partly responsible for every death caused by every programming bug ever? Should we blame all civil engineers every time a bridge collapses?

I am an individual, not "part of a system". Simply being a driver doesn't make me a reckless or dangerous one. For example, 10K of those deaths involve DUI, and I am certainly not part of that system. I'm sure 95% of drivers never kill anyone.


>I am an individual, not "part of a system". Simply being a driver doesn't make me a reckless or dangerous one. For example, 10K of those deaths involve DUI, and I am certainly not part of that system. I'm sure 95% of drivers never kill anyone.

Massive misunderstanding. As a driver your actions form a part of the traffic around you; even responsible drivers are "attached" to the system and put pressure on it. Maybe one day somebody rear-ends you and you are knocked forward into a crosswalk, or you brake suddenly to avoid hitting a pedestrian and cause an accident behind you. Or maybe your very safe driving leaves enough room for somebody to take a foolish risk cutting in front of you and lose control of their car. Collectively, society accepts that getting places quickly is worth the cost we pay in human lives. And each road user agrees to a small risk of death/injury beyond their control in order to get somewhere.


I'd guess that in the US, at least, 99.2% of drivers never kill anyone while driving (including themselves).

(back of the envelope, assuming 300,000,000 drivers driving for 60 years of their lives at a 40,000-per-year death rate)
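
That back-of-the-envelope, spelled out (same rough assumptions as above: roughly one death attributed per driver, no repeat involvement, so treat it as an order-of-magnitude estimate only):

  drivers = 300e6          # assumed number of US drivers
  driving_years = 60       # assumed driving lifetime per person
  deaths_per_year = 40e3   # approximate annual US road deaths

  lifetime_deaths = deaths_per_year * driving_years    # 2.4 million over 60 years
  share_involved = lifetime_deaths / drivers           # 0.008
  print(f"~{1 - share_involved:.1%} of drivers never kill anyone")   # ~99.2%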


It's almost impossible to drive a car without threatening small children and other people with lethal force. People regularly wait on the sides of roads when every traditional custom, convention and current law demands that car drivers stop, but they never do it. "I want to get to the shops quicker" is not an excuse for levelling a gun at someone and asking them to please step aside.

Moreover, it is illegal to drive a car safely. If you try it, you will be pulled over. You may be fined or lose your licence. Try it and see. So yes, being a car driver does make you a dangerous one.


Unless you define what you mean by "drive a car safely" I don't see how one can even begin to evaluate that claim meaningfully.


Perhaps the same standard should be applied to human drivers: if a human makes a mistake and kills somebody, the rest should be taken off the road for retraining.

I wonder how the issue developed when the automobile was first introduced. I remember something about needing to have a guy walking in front of the vehicle waving a warning flag, but apparently that didn't last long.

Found it - https://en.wikipedia.org/wiki/Red_flag_traffic_laws


It seems like a faulty analogy, unless you want to tell me that every Uber car is running unique software.


Yeah, possibly. On one hand, the Uber setup should be examined for flaws. On the other hand, we shouldn't get paranoid about the occasional traffic death when we've already decided (for human drivers) that this is acceptable collateral damage in the transport system.


The entire pitch for autonomous driving is that it's safer than human drivers. If that's false it's not clear why we should continue to allow Uber to test their work on public roads.


It could be safer than human drivers and still cause quite a few deaths. We can't evaluate such statistics from one accident. We don't even know if this particular accident would have been avoided by a typical human driver (there was apparently one behind the wheel of the vehicle).


I think the world can probably stand to wait a bit of time while precisely this question is investigated.


That's reasonable while the system is in testing and in small scale use. If it ever becomes a major part of the transportation system, it won't be desirable to shut it all down every time there's an accident.


In my view, it should not become a major part of the transportation system until we are confident that it's actually safer.


Not a meaningful counterargument. Someone can drive, and even have a history of being an at-fault driver who caused a death, yet still justifiably not believe that public roads should be used for private testing where the profit is private but the harm is socialized, or that money compensates for lost lives.


And yet, people and companies undertake activities all the time that involve risks to life, and we do indeed attach a monetary value when those lives are harmed.

Or, alternatively, we do indeed have a cap to the amount of resources we are willing to expend to save a life - lives are not of infinite value.


What a completely terrible thing to say. This isn't about saving a life, this is about not killing somebody. Just because a company like Uber thinks it can make a lot of money doesn't mean it can simply take risks like these.


Every time you sell food you take the risk of killing people if something goes wrong. And what about carrying people in planes? These risks are taken continuously, for profit. How is that different?


Both of those industries have tremendous regulations in place to prevent accidents and injuries. If someone gets salmonella poisoning and it is traced back to a company, there is a massive recall at the company's expense. Air travel is one of the safest modes of transportation available (statistically) because of the NTSB and the rules/regulations put in place after each and every accident.

That's how it is profoundly different.


Air travel is only safe because companies have taken these risks with people's lives. A society that takes no risks is a society that will achieve nothing new.


Air travel and eating food at a restaurant are opt-in actions. To avoid this risk, you would have to opt out of using the public road system that you are required to use.


We wouldn't accept it if people got killed by being sold poisoned food, at least not where I'm from. Plane crashes are investigated, licenses are suspended, and blacklists are kept; furthermore, software is tested and verified before it is used in production. We shouldn't accept excessive risks just because a profit can be made -- see the regulations for truck and bus drivers.


But what makes you think Uber didn't test their software? When Boeing introduces a carbon fuselage it is taking risks with people's lives. They do reasonable testing, but a technology isn't proven until it has been widely used for a long time. No risk = no innovation.


Since there is absolutely no binding federal regulation I don't have a lot of confidence that the level of testing is comparable to what's done for airplanes.


It's all well and good to not like the choice, but the choices still have to be made - how much are we willing to give up economically in order to reduce immediate risk to lives? Included in this must be the consideration that economic value can be used to save lives, through higher living standards and better health care.

Every regulatory system in the world has to consider these things, explicitly or (more commonly) implicitly. See e.g. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1633324/ for the application of the idea to healthcare, or http://www.nytimes.com/2011/02/17/business/economy/17regulat... for the public policy implications for environmental regulation.

Or more relevant yet, this "guidance on valuing reduction of fatalities and injuries by regulations or investments" by the Department of Transportation: https://www.transportation.gov/sites/dot.gov/files/docs/VSL_...


Did you ever take an engineering ethics course? One of the things talked about is the monetary value placed on a human life. You can't make that value infinitely high or literally nothing can happen. You also don't want it super low.


We get it, it's sad and money isn't everything, but the world can't stop because of a single death. If there is legal liability here then it'll be handled.


The world doesn't have to stop, just the live testing of self-driving cars on public roads with very little evidence that the tech is ready.


More evidence than human drivers, the same human drivers who are legally responsible/liable for this incident.


This attitude went great for Ford with the Pinto, as I recall.


Courts are the equivalent of exceptions in computer programming. You should have them for exceptional circumstances, but you shouldn't use them in your program for routine control flow (because they are slow, among other things).

Unless Python.
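
To make the analogy concrete, a small toy sketch of my own (not from the thread) contrasting "check first" with Python's idiomatic exceptions-as-control-flow:

  def get_port_lbyl(config: dict) -> int:
      # "Look before you leap": the common path never raises.
      if "port" in config:
          return config["port"]
      return 8080

  def get_port_eafp(config: dict) -> int:
      # "Easier to ask forgiveness than permission": try/except used as
      # routine control flow, which is considered idiomatic in Python.
      try:
          return config["port"]
      except KeyError:
          return 8080

  print(get_port_lbyl({}), get_port_eafp({"port": 9000}))   # 8080 9000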


If you believe that then you need to change the entire economy.


Other countries don't work the way the US does. Instead of suing the company, you just notify the regulator and they do an audit. If the company is at fault it is fined and ordered to correct itself, and monitored to see that it does. You can get some money from the company, but it won't be a payday for you.


Manslaughter doesn’t fall under tort law, IIRC.


Tort law covers death. Manslaughter is a criminal charge but wrongful death, etc. are how tort law handles the situation. You may recall O.J. Simpson was sued for wrongful death and lost.

http://www.nytimes.com/1997/02/11/us/jury-decides-simpson-mu...


Wouldn't one typically press both charges? Wrongful death to allow the family to be compensated, corporate manslaughter to punish the company and incentivize not killing people tomorrow. It doesn't seem like it's necessary to choose the non-criminal case.

To state the obvious, I am not a lawyer.


Civil suits are intended to serve both purposes - compensation and deterrent (punitive vs compensatory damages). E.g. if you sue a company for doing you damage, the compensatory damages are based only on the injury you suffered, but punitive damages' presence and severity is affected by things like the level of negligence of the company.

The theoretical difference with criminal charges is that they deal with harms to a different sovereign (i.e. the state and the public) that need to be punished regardless of the wishes of the immediate victim, e.g. you're not allowed to settle a murder charge out of court.


It depends on the details of the case. (IANAL either.)

Criminal law applies to criminal acts. It's possible to do something wrong that "injures" someone (physically or in some other way) without committing a crime.

Criminality generally requires wrongful intent. If you simply screwed up while otherwise obeying the law then you haven't committed a crime (negligence can be a crime, e.g. if you are more careless than a "reasonable man" would be, that is in itself a form of bad intent -- so swinging swords around in public places while blindfolded isn't OK).

Also, tort law has a different standard of proof -- it is resolved based on preponderance of evidence rather than proof beyond reasonable doubt, so there's a lower bar than for throwing someone in jail.

Oh, and it's harder to prove criminal charges against nebulous entities. (Who's the criminal in this case? Uber's CEO? The head of the software team? The person who gave Uber a permit to test their crap on public streets?)

Uber's overall conduct might approach the point of "criminal enterprise" at which point RICO statutes might be invoked. Not likely though.


The definition of manslaughter varies between states, and I am not a lawyer, but in general, criminal manslaughter charges only apply if there is an element of extreme negligence or doing something unlawful (like driving in an illegal manner) and causing a death.

For instance, there are accidents that cause deaths all the time; but they would only be prosecuted as manslaughter if someone were doing something considered inherently unsafe and unlawful.

If you run a red light, and kill someone in the process, it's possible you could be prosecuted for manslaughter, though even then you might not be if there could have been extenuating circumstances (sun in your eyes, etc).

In this case, there would only be a chance of a manslaughter charge if Uber were somehow being particularly negligent. In this case, it sounds like there was a human operator in the car, though the car was in autonomous mode. It's possible that the operator wasn't paying the attention that he or she should have been, or it's possible that even the human operator didn't see the pedestrian in time. It mentions that the pedestrian was crossing outside of any crosswalk, so it's possible that this was a tragic accident of them trying to cross a street with traffic without having been sufficiently careful.

On the other hand, it mentions a bicycle, but also says the victim was walking, which I find odd:

  "Elaine Herzberg, 49, was walking outside the crosswalk on a four-lane 
  road in the Phoenix suburb of Tempe about 10 p.m. MST Sunday (0400 GMT 
  Monday) when she was struck by the Uber vehicle, police said." 

  Local television footage of the scene showed a crumpled bike and a Volvo 
  XC90 SUV with a smashed-in front. It was unknown whether Herzberg was on 
  foot or on a bike.
Anyhow, it is relatively uncommon for drivers to be charged with manslaughter in this country; they generally have to be doing something quite egregious, like driving drunk or driving at 90 in a 30 mph zone. Most of the time, the response to people being killed in traffic accidents is just that accidents happen.

Unless the operator was being particularly negligent, or there was some serious and known problem with the self driving car but it was put on the road anyhow, I doubt any manslaughter charges will be filed.

Remember, Arizona has explicitly been encouraging the testing of self-driving cars, so I expect that testing a self driving car, with an operator to take over in cases which it can't handle, would not be considered unlawful or extremely negligent. Maybe what the operator was doing could be, but we'd need more information before it would be possible to tell.


> On the other hand, it mentions a bicycle, but also says the victim was walking, which I find odd

The police clarified that the victim was walking her bike across the street: https://twitter.com/AngieKoehle/status/975824484409077760


Do auto accidents fall under criminal law in Arizona? In NY, where I live, unless it's a DUI or otherwise reckless and/or intentional in some way, it's just a traffic violation. You could take the offender to civil court, but that's still crazy. In this particular case, I think Uber, if they are at fault, would have to pay an arm and a leg to get out of this.

While the woman's death is tragic, I hope this doesn't set us back on whatever progress made in autonomous driving.

EDIT: removed "While this a serious f*up by Uber and"


Not sure why you think that taking the offender to civil court is crazy? It happens literally every day. And in the case of death you're going to get major damages. Normally these cases settle, but, like it or not, many, many people sue over serious and fatal auto accidents.


>While this a serious f*up by Uber

How do you know that? From my initial reading of the police report, the woman was jaywalking and ran out into the middle of the street. I'm not even sure a human or autonomous driver would have been able to stop in time. The only tech that might have been able to is Waymo's since, based on the video they released, they have a radar lock on every pedestrian within a mile of the vehicle and they do predictive tracking to determine their position. Even then, it might have still not stopped in time.


Uber still has a responsibility to avoid hitting jaywalkers. I would be more sympathetic on a freeway, but this seems negligent. If they can't avoid hitting jaywalkers they need to keep testing in less dangerous circumstances.


You're making the mistaken assumption that this accident could have been avoided at all. While autonomous vehicles should have a higher threshold for responsibility than human drivers, it's not possible to expect them to never be involved in accidents. For all we know, the jaywalker ran right out into the road.


I guess you are right. What I meant to say was it's a terrible PR for Uber, and a serious f*up if Uber was at fault.


If self-driving cars are only as safe as human drivers, why bother spending billions to achieve the status quo?

Unless it is a pure capitalist move.


They're far better than human drivers, but that depends on the system. In my personal opinion, based on the sensor video that Waymo released a few weeks ago, their self-driving tech is far more focused on safety than Uber's, and their vehicles are likely far safer than a human driver.


Have you watched the video provided by the Tempe police? The woman was jaywalking, but she was far from running out into the road. This is something Uber's system should have picked up.


Interesting question here: did Uber have authorization to operate in automated mode in AZ? If not, and they had a human operator behind the wheel as a backup, is that human liable for manslaughter charges for not being in control of the vehicle?


  Did Uber have authorization to operate in automated mode in AZ?
According to my reading of AZ Executive Order #2018-04 [0], presuming they meet the guidelines established in provisions #2 and #3, they do:

2) Testing or operation of self-driving vehicles equipped with an automated driving system on public roads with, or without, a person present in the vehicle are required to follow all federal laws, Arizona State Statutes, Title 28 of the Arizona Revised Statutes, all regulations and policies set forth by the Arizona Department of Transportation, and this Order.

3) Testing or operation of vehicles on public roads that do not have a person present in the vehicle shall be allowed only if such vehicles are fully autonomous, provided that a person prior to commencing testing or operation of fully autonomous vehicles, has submitted a written statement to the Arizona Department of Transportation, or if already begun, has submitted a statement to the Arizona Department of Transportation within 60 days of the issuance of this Order acknowledging that:

a. Unless an exemption or waiver has been granted by the National Highway Traffic Safety Administration, the fully autonomous vehicle is equipped with an automated driving system that is in compliance with all applicable federal law and federal motor vehicle safety standards and bears the required certification label(s) including reference to any exemption granted under applicable federal law;

b. If a failure of the automated driving system occurs that renders that system unable to perform the entire dynamic driving task relevant to its intended operational design domain, the fully autonomous vehicle will achieve a minimal risk condition;

c. The fully autonomous vehicle is capable of complying with all applicable traffic and motor vehicle safety laws and regulations of the State of Arizona, and the person testing or operating the fully autonomous vehicle may be issued a traffic citation or other applicable penalty in the event the vehicle fails to comply with traffic and/or motor vehicle laws; and

d. The fully autonomous vehicle meets all applicable certificate, title registration, licensing and insurance requirements.

-----

[0] "Executive Order: 2018-04 - Advancing Autonomous Vehicle Testing And Operating; Prioritizing Public Safety" - https://azgovernor.gov/file/12514/download


Jesus Christ, man! This was a person, not a "negative externality."


And conservative politicians continue to make moves to erode tort law practice. See "See You In Court" by Thomas Geoghegan


Theoretically the reputation cost would discourage for-profit organizations from releasing unsafe self-driving vehicles for testing on public roads.

The ironic part here is that Uber already has a strong reputation for skirting the law and not caring. Their response and the ensuing audits will be worth watching.


> Theoretically the reputation cost would discourage for-profit organizations from releasing unsafe self-driving vehicles for testing on public roads.

This assumes that the reputation costs are comparable in weight to the acts that they commit. Based on the past year and what I hear about Uber's growth[0], I'd argue that their reputation and their valuation are quite uncorrelated, but it's hard to know the truth.

0: https://www.forbes.com/sites/miguelhelft/2017/08/23/despite-...


> This assumes that the reputation costs are comparable in weight to the acts that they commit.

Rather, this assumes that the reputational cost, weighted by the probability of incurring it while undertaking some action, is comparable to the gain from that action.


The problem for Uber is that if they lose this race, there is no Uber. As a result, they are willing to risk a lot in order to succeed; this seems to be collateral damage to them. Completely unsurprising, given that they apparently lied about running the red light in autonomous mode and ran their program without proper licensing (and reporting). Nobody should be surprised that Uber was the first company to rack up an autonomous driving death.


>Theoretically the reputation cost would discourage for-profit organizations from releasing unsafe self-driving vehicles for testing on public roads.

And theoretically the only people who can regulate banks are the banks themselves.

Funny how you hear that argument every time a horrible industry is trying to keep profits high by externalizing costs: meat packers in 1900, tobacco in 1950, oil and gas to this day.


What if you already have a sh*t reputation, like Uber?


I doubt it. If you exclusively kill people outside the vehicle who are walking or riding bikes, you arguably even encourage people who don't have a car to hire one rather than walk or ride.

The only real cost would be if they're made to pay a price in a court of law, which seems unlikely: the woman who was killed had a bike with her, it's unknown if she was on the bike at the time, and she's been described as a pedestrian outside of a crosswalk. Clearly Arizona authorities just don't care about safety.


> Theoretically the reputation cost would discourage for-profit organizations from releasing unsafe (...)

I wonder whether food companies (like the meat packing industry in Chicago) said the same thing before the FDA was created in 1906; i.e. that reputation alone would keep the industry from selling mislabeled, spoiled, or adulterated food.


Really it's "socialize the costs" as well, considering that autonomous vehicle development was funded by taxpayers.[1] Like most high tech.

[1] https://en.wikipedia.org/wiki/DARPA_Grand_Challenge


The public spent a few million dollars on the Grand Challenge, while Waymo alone is worth tens of billions of dollars.

https://www.cnbc.com/2017/05/23/alphabets-self-driving-waymo...

Self-driving car companies will have massive net positive externalities on the public, to the tune of hundreds of billions of dollars per year when fully deployed.


> The public spent a few million dollars on the Grand Challenge, while Waymo alone is worth 10's of billions of dollars.

False comparison. First of all DARPA has invested a lot more than that in all sorts of related AV technologies (e.g. LIDAR, AI, robotics). But the real question is when were those "few million dollars" invested. Seed investments generally are much smaller than what companies end up being worth.

In our high tech system, taxpayers generally take on the riskiest stage of very early development, where it takes billions of dollars in bets spread across a wide range of technologies over 10+ years. And you're right, there are some socialized benefits, but the statement stands that the costs are socialized as well.

Imagine if you told Google's earliest investor that their investment was a tiny percentage of what it's worth today, and they enjoy the benefit of using Google now, so it's fair that Google's later investors kept all the equity.


> First of all DARPA has invested a lot more than that in all sorts of related AV technologies

I was responding to your link about the Grand Challenge.

> Seed investments generally are much smaller than what companies end up being worth.

Of course. Are you really going to make someone else go point out the hundreds of millions of dollars of seed-level funding by Google, by Uber, etc., to demonstrate the obvious point that the DARPA Grand Challenge is terrible evidence for the claim "autonomous vehicle development was funded by taxpayers"?

> so it's fair that Google's later investors kept all the equity.

People who invest in Google get to keep the equity they purchase, but they don't get a claim to the equity of companies that exist because of Google's search product. Likewise, when the government invests in public goods, they don't then get to claim everything that is built on top of the public good.

Indeed, the government provides the rule of law, without which almost no modern economic development could take place. But that obviously doesn't mean the government has claim to all economic value. Likewise, none of us could work without eating, but that doesn't mean we owe all our income to farmers.


DARPA is waaay before what's called "seed" level funding in the commercial sector. They are the pre-pre-pre-seed. Taxpayers fund a lot of military procurement for nascent technologies as well.

Again you make a false comparison. Nobody claimed that early stage investors are entitled to "all economic value". The statement stands that costs are socialized. Silicon Valley is greatly subsidized by early stage taxpayer investment.

> Are you really going to make someone else go point out the hundreds of millions of dollars of seed-level funding by Google, by Uber, etc., to demonstrate the obvious point that the DARPA Grand Challenge is terrible evidence for the claim "autonomous vehicle development was funded by taxpayers"?

"Autonomous vehicle development was funded by taxpayers" is a factually correct statement. You again fundamentally misunderstand the distinction of when investments are made vs. quantity of investment. Earlier stage investments are riskier; Google et al did not start pouring money in until the technology started showing some promise after many years of taxpayer investment. As is commonly the case with our high tech system.


> They are the pre-pre-pre-seed.

I guess we should credit Andrew Carnegie with pre^12-seed funding since he founded Carnegie-Mellon and a lot of relevant robotics research has taken place there.

Again, I am not discussing all of DARPA's activities, I am talking about the Grand Challenge you brought up, which I continue to maintain is minuscule compared to a million other sources and thus is terrible evidence that "autonomous vehicle development was funded by taxpayers" in any non-trivial sense.

> Nobody claimed that early stage investors are entitled to "all economic value".

You misinterpret my analogy. The point is that your approach, if taken seriously, would mean the government would have a claim on every single bit of economic value in the US, not that it would have full ownership of all of it.

> The statement stands that costs are socialized.

> "Autonomous vehicle development was funded by taxpayers" is a factually correct statement

Ha, yes, and we can also conclude that autonomous vehicle development was funded by Carnegie, Roomba, and my buddy Alex who runs a robotic delivery startup.


Once again you have confused the timing and thus risk of investment with quantity. Furthermore, and trivially, donations by private citizens such as Carnegie are not a socialization of costs.

The simple fact remains that taxpayers have made significant and critical investments in nurturing AV technology, like many other technologies that Silicon Valley has commercialized once they bore fruit. Your fundamental premise that investments that are "minuscule" in size are necessarily minuscule in significance is trivially wrong.


You've repeatedly attributed multiple claims to me I'm not making. I can't tell how much of that is willful, but either way I won't continue the discussion.


> when the government invests in public goods, they don't then get to claim everything that is built on top of the public good.

If they used the right licensing for the information that they created, then they might.


Not if they are gasoline cars; particulate pollution alone kills tens of thousands.

An electric rail network, on the other hand, would have real positive externalities, but it isn't as easily privatizable, so it won't be built.


Even electric cars aren't pollution free, and there are still other negative externalities (like health costs associated with car-centric life styles).


The latest EPA Tier 3 gasoline vehicles have extremely low particulate emissions. If we're talking about building new vehicles anyway then your fatality estimate is way too high.


You have no reliable basis for quantifying the net positive impact. The technology looks promising but at this point we don't even know whether it can be made to work reliably outside of a few limited areas.


Yeah, no.

A good example is pharmaceutical research. The NIH budget is ~$30B, and arguably only a fraction of that is pure drug development.

R&D spending by the top pharma companies (ignoring VC investment in startups) is over $70B.


That cost has long since been dwarfed by private investments. IMHO that was taxpayer money well spent.


Self driving cars could potentially save tens of thousands of lives a year not to mention all of the other social benefits and quality of life improvements. No doubt whoever gets it working first will make a ton of money, but society will see a massive benefit as well.


It provides a benefit to the owners of such a company, as labor costs are reduced. Otherwise this is the same nonsense about how we would live better lives and work less as our jobs were replaced by robots. It turns out that an economic "system" based upon dog-eat-dog principles will not allow such altruistic results to accrue to society. Like someone said above: "socialize the risks, privatize the profits".


I'm not quite sure how you think this will work. Is Google going to commission hitmen to kill 10s of thousands of people each year to make up for the deaths they prevent? Are they going to prevent those who can't drive now from using their cars? Or play irritating sounds in their cars to replace the stress of driving?


That's a strawman argument.

Automation will indeed do some harm to a certain group of people. The question is who should be responsible for bailing those people out (if at all): society (i.e., the government), or the companies reaping the benefits of said automation?


Of course, economically, taxi drivers and others will lose while programmers/shareholders will win. The point is that for the 99% (or whatever) of us that aren't in either category, self driving cars will be a massive improvement to our lives. Especially to the 10s of thousands each year that won't die.


>Self driving cars could potentially save tens

The problem is that we should be 100% sure there will be fewer deaths before allowing self-driving cars on public roads.

I don't think we have the real numbers: how many km each car drives, and how many times the human had to intervene to prevent a serious crash. If there are laws and checks in place so that the actual numbers are reported, then I want to see them; I read all the self-driving topics here and have never seen such laws.

Until we have the real numbers, the claim that self-driving cars will save more lives is just a hope for a far-away future.


"Could potentially save tens of thousands of lives a year" is not the same as "will save tens of thousands of lives a year right now". The fact that a technology has the potential for huge benefits does not give it a free pass to kill people while it's being developed.


Based on what? What evidence supports your conjecture that "society will see a massive benefit"?


I presume saving tens of thousands of lives is a benefit to society?


It's a bit of wishful thinking, isn't it?

We don't really know if there will be any safety benefits at all, and while I believe we might get to that point, it's a bit farther away than some tech giants would like us to believe.

If you listen closely to who says what, you'll notice that a lot of entrepreneurs claim we will have full AI within a couple of years, while a lot of boring engineers huff and puff and are generally pessimistic.

I'm sure there are great savings involved in self-driving transport - especially long-haul - so it will happen sooner rather than later, but initially this will mean more deaths, not fewer.


I'm not sure how wishful thinking it is. AFAIK no self driving car has been at fault in an accident. They have always been caused by a human driver and, at worst, the self driving car failed to avoid the collision.

For better or worse, I don't think a self driving car will be commercially successful until it shows it is at least as safe as a human driver. More likely it will have to demonstrate that it is significantly safer. It might be the case that whichever company first launches doesn't correctly gauge the safety of their car, but I definitely think the goal is to be safer from day one.


> AFAIK no self driving car has been at fault in an accident.

Nobody is saying the car itself was at fault. People are saying (justifiably) that the people who put the car out on public roads with faulty engineering are at fault.

> I don't think a self driving car will be commercially successful until it shows it is at least as safe as a human driver. More likely it will have to demonstrate that it is significantly safer.

I agree. And the incident under discussion illustrates that the technology has not yet reached that point.

Furthermore, "commercially successful" is not the first objective that needs to be met. The first objective is "safe enough to be allowed on public roads". The incident under discussion illustrates that the technology has not yet reached that point either.


My old comment from https://news.ycombinator.com/item?id=15076613

So much talk about LIDAR and other sensors. Why does nobody talk about the obvious idea of a Road Object Message Bus? ROMB is a protocol where each road object (a traffic light, a sign, a car, a bicycle, etc.) transmits info about itself. A car could broadcast its direction vector, its intention to turn, and any non-ROMB moving object it sees. A traffic light could broadcast its current state and when it is going to change. That information would greatly enhance overall safety, especially in rain and snow conditions, when even LIDAR fails.

Self-driving is so important (just after eliminating combustion engines) that we could upgrade existing cars with cheap ROMB boxes. A vehicle GPS tracking system costs about $30; a ROMB box would cost about $60. Let's say that from 2027 all cars have to have a ROMB box to enter a downtown ...

Let's say your car's ROMB received info about the white truck, while your car's cameras and vision recognition systems see just a cloud and no truck within 100 m.

ROMB's purpose is not to replace cameras or LIDARs, but to extend the gathered info.
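
To make the idea concrete, here is a minimal sketch of what a single ROMB broadcast might look like. The field names and units are purely illustrative assumptions, not any real standard (real V2X efforts such as DSRC and C-V2X define their own message formats):

    # Illustrative only: a hypothetical ROMB broadcast message.
    # Field names and units are assumptions, not a real standard.
    import json, time
    from dataclasses import dataclass, asdict

    @dataclass
    class RombMessage:
        object_id: str      # pseudonymous ID of the transmitter
        object_type: str    # "car", "bicycle", "traffic_light", ...
        lat: float          # WGS84 latitude
        lon: float          # WGS84 longitude
        heading_deg: float  # direction of travel, 0 = north
        speed_mps: float    # speed in meters per second
        intent: str         # e.g. "straight", "turn_left", "stopping"
        timestamp: float    # seconds since epoch

        def encode(self) -> bytes:
            # A real protocol would sign this to prevent spoofing.
            return json.dumps(asdict(self)).encode()

    # A cheap "ROMB box" on a bicycle broadcasting once per second:
    msg = RombMessage("bike-7f3a", "bicycle", 33.4370, -111.9430,
                      90.0, 4.2, "straight", time.time())
    print(msg.encode())

The hard parts, of course, are not the message format but spoofing resistance, coverage, and what the car should do when broadcasts and its own sensors disagree.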


Please don't paste old comments into new conversations. Especially not twice in the same thread!


I deleted the old comment because I thought it was connected to another deleted comment. You can check your web server logs and see that I deleted the old one at the same time I posted the new one.

And I think the comment is very relevant to the current discussion. It talks about how to increase the safety of self-driving cars.

Please explain why it is not relevant in your opinion, because from my point of view it looks like you didn't read it before downvoting.

BTW, HN is not compliant with GDPR. After some time I cannot delete my comments.


The counterargument you propose is invalid because it would be DECADES until >90% of the current automotive market is converted to fully autonomous vehicles, MEANING cars currently on the road are either retrofitted or completely removed from the road and replaced by an AV.

So we are all in a lose-lose. I'm anticipating severe backlash from Congress this week, if not today.


Because it does not require handing over control to the car to get a good test. The cars can simply record their decision-making process and immediately flag all exceptions, which includes hitting anything. It would be very interesting to see all exceptions, because it is to be expected that all these systems must fail.

We don't need active testing on our streets. Automakers would never have dared to test ABS or other "safety" systems in such a manner, for the simple reason of liability.


Could you please not use allcaps for emphasis in HN comments? This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.


The Washington Post has an updated story with the victim's name:

https://www.washingtonpost.com/news/dr-gridlock/wp/2018/03/1...

> Police said the vehicle was northbound on Curry Road when a woman, identified as 49-year-old Elaine Herzberg, crossing from the west side of street, was struck. She died at a hospital, the department said.

No mention of her being a bicyclist. My gut instinct is to accept that the victim was indeed a pedestrian -- with the assumption that they'd have that detail cleared up by the time they have her identity ready to release to the public.


I'd think so too, but there is literally a crumpled bicycle in the photo of the scene. Also the Uber has a dent on the right side, which faces east.

It's possible she was walking her bicycle across the street, but that still looks pretty bad for Uber. That street is a straight shot with clear visibility.


And if she was walking a bicycle, it was extremely unlikely that she "darted out". It takes some skill to do that.

(Still need more info, though.)


Yeah, the police clarified that she was walking her bike across the street: https://twitter.com/AngieKoehle/status/975824484409077760


> the vehicle was northbound...crossing from the west side of street

So, she approached from the left. That would make it significantly less likely that she just suddenly appeared in front of the car, right? It's not a narrow street.


What I'm describing is unlikely, as I'd assume she was traveling along the sidewalk for some distance before being hit, but there is always the chance she was walking the bike across the grass and came out from that direction (where those bushes are; they seem tall enough from what I can see). This all assumes that by 'left' you meant the left side of the Google Street View posted above.


The location of the crash (from what I can tell): https://www.google.com/maps/place/N+Mill+Ave+%26+E+Curry+Rd,...


Close. Going by the article photo [0], the location was in front of these double-left turn signs, and the Uber vehicle would have been facing the other direction, i.e. northbound:

https://www.google.com/maps/place/N+Mill+Ave+%26+E+Curry+Rd,...

[0] The caption for this photo says the crack in the road represents "burned out flares at the location where a woman pedestrian was struck and killed". Not sure if that is exactly where she was hit, or if that's where her body landed, or if the flares were just near the location.

http://s3.reutersmedia.net/resources/r/?m=02&d=20180319&t=2&...


FTA "Elaine Herzberg, 49, was walking her bicycle outside the crosswalk".

The article says "as soon as she walked into the lane of traffic she was struck".

The police sergeant also "said he believed Herzberg may have been homeless".

These three things together may indicate that even a human driver might have struck and killed the lady if she entered the roadway unsafely. I'm not sure how everyone (especially on Twitter) can be jumping all over Uber before the investigation is complete.

Sad of course, either way, but not yet enough evidence to assume that the algorithm+driver combo was any less safe than a human driver alone.


Finding any tidbits on what actually did happen has been very difficult. I get that the investigation is of course ongoing, but that's what I want to know - did the tech actually make a mistake here?

And a mistake in a broader sense - was it an avoidable accident? If it wasn't this barely feels like news, other than as a milestone.

Also... I don't see how her being homeless has anything to do with it.


> Also... I don't see how her being homeless has anything to do with it.

I can think of a number of ways it may be relevant. In addition to what I said in this discussion, I left a comment about that here:

https://news.ycombinator.com/item?id=16625242


> Also... I don't see how her being homeless has anything to do with it.

Well, higher than average rates of substance abuse in the homeless population. Other than that, I'm not sure how it could be relevant or why the article mentioned it. We wouldn't know without an autopsy, but being drunk could explain walking out into traffic.


Homeless people tend to be in poor mental health and possibly under the influence of drugs or alcohol. All three of these things could have contributed to bad judgment when she was hit.

It's unfortunate that this happened, but we do need to take into account the situation and all the factors involved.


The article says she died from her injuries some time later. So, she didn't die instantly on impact.

I know more than average about homelessness. A high percentage of homeless people have serious health problems. If she was homeless, she may have already been in very poor health. That may have contributed to her death.

I can think of potentially other contributing factors, but I am leery of coming across like I am badmouthing homeless people. They are subjected to enough hostility and prejudice and I am usually trying to advocate on their behalf.


Anecdote: About a year ago when I was driving at night I almost hit someone, apparently homeless with dark-colored clothes, pushing a covered shopping cart across a 5-lane street with no crosswalk. Fortunately I was in one of the middle lanes so I noticed them and slammed the brakes in time (while risking getting myself rear-ended).


Exactly. Given the information available at this point, the headline and most of the article is pure sensationalism. It could equally easily have been spun as "Homeless lady walks in front of car" but that won't get the clicks.


The second ever [1] fatal car crash involving a self-driving car, and the first in which a third party was killed.

[1] https://www.theguardian.com/technology/2016/jul/01/tesla-dri...


The Tesla was not a self-driving car; please stop conflating the two.

This case is in fact the first fatality caused by an autonomous test vehicle.

There have been at least several Tesla Autopilot-related fatalities and injuries, but I would seriously not put those in the self-driving bag.


Can you provide a source? From all the articles I can find, the language is all pretty much the same:

"Tesla driver killed while using autopilot"

"The Tesla driver killed in the first known fatal crash involving a self-driving car"

"It's been nearly a year and a half since Joshua Brown became the first person to die in a car driving itself."

"Tesla bears some blame for self-driving crash death, feds say"


Tesla doesn’t have self-driving capability; their Autopilot is just driver assistance (like adaptive cruise control).


Driver assistance that takes care of everything (i.e. steering, braking, navigating, parking), that at the time the accident occurred didn't even require hands on the steering wheel?

Seems like an attempt to shift the blame off the autopilot system and onto the driver.


Could you please not use allcaps for emphasis in HN comments?

This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.


Sorry, I just pasted the text in the article.


There may be more than that. This one wasn't as heavily covered https://www.mercurynews.com/2016/09/14/second-fatal-crash-ma...


Interesting, apparently there's no way to know if autopilot was enabled or not.

> A Tesla spokesperson said the car in the Chinese wreck was badly damaged and unable to transmit data. “We therefore have no way of knowing whether or not Autopilot was engaged at the time of the crash,” the spokesperson said in a statement.


Then require black boxes for autonomous cars.


Why not the same for human-driven cars? Human-driven cars kill thousands; if 2 victims of self-driven cars make black boxes necessary, surely thousands of victims of human-driven ones do?


Tesla Autopilot != Self-driving(full autonomy)


It's terrible when anyone is killed in an automobile accident. The irony is in the promise of autonomous vehicles to prevent this very thing.

I've heard the general report, and I find it interesting that a supervisor was at the wheel, and they, as well as the car, failed to prevent the pedestrian from getting hit. I read she was not in a crosswalk, and I'm sure the specific details of the accident matter greatly. It makes me wonder if a human driver with no autonomous system would have fared any better, since someone paid to be alert and a car designed to be alert both failed to prevent the accident.

I'll reserve final judgment until more details are out.


It doesn’t matter how good the driver is, AI or human, if you unexpectedly walk right out in front of a car moving at 40mph it’s not going to be able to stop.


Now the question is: why is a car (autonomous or not) driving at 40 mph on a road where people can be expected to walk right out in front of it?

Would you drive 40 mph down a city street with trucks and buses parked head to tail on both sides of the road?


In the early 20th century the automobile industry mobilized to downplay the deadly threat automobiles posed to pedestrians, and lobbied to create new laws to frame pedestrians as being at fault instead of car drivers (eg. jaywalking).

I expect we'll soon see tech companies do the same in order to favour their autonomous automobile businesses at the expense of pedestrians and cyclists.


This. There are many posts on this thread suggesting we could fix all this by lowering speed limits, etc. That's been known for a hundred years at this point.

If you want to read up on the history, grab a copy of "Fighting Traffic" by Peter Norton. He goes through all the efforts cities made in the 1920s to limit cars, and how the motor lobby formed in opposition and little by little turned the tables to the situation we find ourselves in today.

https://mitpress.mit.edu/books/fighting-traffic


Or shorter version https://www.researchgate.net/publication/236825193_Street_Ri... or podcast version https://99percentinvisible.org/episode/episode-76-the-modern...

Indeed it has been known for a long time. This is an opportunity to reverse bad decisions made a century ago, regardless of how quickly self-driving cars actually are adopted.

https://www.strongtowns.org/slowthecars/


Can we please not overreact and send self-driving research as a whole down the drain?

What do the statistics say? How many miles have the self-driving cars driven, and how many deaths were they responsible for? How does that compare to human drivers?

When a million humans each drive a car for a mile and one of those miles results in a death, it's easy to pin the blame on a "random drunk/distracted driver". How about thinking of the self-driving software as a combination of all human drivers, with much lower odds of being drunk, sun-blinded, or distracted than the average human driver.


> What does the statistics say? How many miles have the self-driving cars driven, and how many deaths were they responsible for? How does it compare to a human driver?

As of December 2017 Uber had driven 2 million autonomous miles[1]. Let's be generous and double that, so 4 million.

The NHTSA reports a fatality rate (including pedestrians, cyclists, and drivers) of 1.25 deaths per 100 million miles[2], and 100 million miles is twenty-five times the distance Uber has driven.

You probably shouldn't extrapolate or infer anything from those two statistics; they're pretty meaningless because we don't have nearly enough data on self-driving cars. But since you asked the question, that's the benchmark: 1.25 deaths per 100 million miles.

[1]: https://www.forbes.com/sites/bizcarson/2017/12/22/ubers-self... [2]: https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...


Scaling those numbers paints a poor picture for Uber. Assuming 3 million total miles autonomously driven thus far from Uber's program:

- Uber autonomous: 33.3 deaths per 100 million miles

- Waymo: 0 deaths per 100 million miles

- National average: 1.25 deaths per 100 million miles

Of course, the Uber and Waymo numbers are from a small sample size.
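
To put a number on how small: here's a rough sketch, treating fatalities as a Poisson process and assuming the ~3 million mile figure above, of the exact 95% confidence interval you get from a single observed event. The width of the interval is the point.

    # Rough sketch: exact (Garwood) 95% CI for a Poisson fatality rate
    # estimated from 1 event in an assumed ~3 million autonomous miles.
    from scipy.stats import chi2

    events, miles, alpha = 1, 3_000_000, 0.05

    lower = 0.5 * chi2.ppf(alpha / 2, 2 * events)
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * events + 2)

    per_100m = 100_000_000 / miles
    print(f"point estimate: {events * per_100m:.1f} deaths per 100M miles")
    print(f"95% CI: {lower * per_100m:.1f} to {upper * per_100m:.1f} per 100M miles")
    # Roughly 0.8 to 186 deaths per 100M miles, an interval that still
    # contains the 1.25 national average quoted above.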

But there's also the Bayesian prior that Uber has been grossly negligent and reckless in other aspects of their business, in addition to reports that their self-driving cars have had tons of other blatant issues, like running red lights.

It seems reasonably possible that an Uber self-driving car is about as safe as a drunk driver. DUIs send people to jail - what's the punishment for Uber?


Scaling those numbers is not useful and in fact actively misleads.

Comically, that’s why OP said not to do that.

Comparing dissimilar things is actually worse than not comparing at all since it will increase the likelihood of some decision resulting from the false comparison.


The goal is to use the best set of information available to us. I merely cited the normalized numbers because it's been asked various times in this thread - questions along the lines of "how does this rate compare with human drivers?"

The purpose of the extrapolation was to get a (flawed) approximation to that answer. By itself, it doesn't say much, but all we can do is parse the data points available to us:

- Uber's death rate after approximately 3 million self-driven miles is significantly higher than the national average, and probably comparable to drunk drivers.

- Public reporting around the Uber's self-driving program suggests a myriad of egregious issues - such as running red lights.

- The company has not obeyed self-driving regulations in the past, in part because they were unwilling to report "disengagements" to the public record.

- The company has a history of an outlier level of negligence and recklessness in other areas - for example, sexual harassment.


But this is precisely why you should not simply extrapolate. Of course people ask, and of course an answer would be useful. But extrapolating one figure from 3M miles to a typical measure (per 100M) is not useful because it provides no actionable information.

Providing this likely wrong number anchors a value in people’s minds.

It’s actually worse than saying “we don’t know the rate compared to human drivers because there’s not enough miles driven.”

Your other points are valid but don’t excuse poor data methods hygiene.

Even now you are making a claim that is baseless on its face, because you don't have enough human-driver data per 3M miles to say Uber's rate is "significantly higher." Although I think it's feasible to find enough human driver data to match samples similar to Uber's, simply dividing by 33 is not sufficient to support your statement.

I haven’t seen data on the public reporting. That seems interesting and would appreciate it if you can link to it.


> the self-driving car was, in fact, driving itself when it barreled through the red light, according to two Uber employees, who spoke on the condition of anonymity because they signed nondisclosure agreements with the company, and internal Uber documents viewed by The New York Times. All told, the mapping programs used by Uber’s cars failed to recognize six traffic lights in the San Francisco area. “In this case, the car went through a red light,” the documents said.

https://www.nytimes.com/2017/02/24/technology/anthony-levand...


It depends on what question you're trying to answer with the data (however incomplete one might view it).

Is the data sufficient to say whether Uber might eventually arrive at a usable self-driving vehicle? Plainly no; it's not sufficient to answer this question one way or another.

Is the data sufficient to indicate whether Uber is responsible enough to operate an automated test vehicle program on public roads? Maybe.

There still needs to be an investigation of cause, but if the cause is an autopilot failure, or a failure of the testing protocols meant to prevent a failing autopilot from harming the public, then the question is what the remedy should be.


I agree there should be an investigation.

I agree that you have to use the data available to make the best decision possible.

There may be methods to account for all of the problems of comparing two different measures, but they require a lot of explanation.

But extrapolating one measure into another without those caveats is wrong. That's the comment I replied to. So in no situation would the method I replied to be useful for any reasonable question.


I think it's very relevant whether the testing protocols are sufficient to prevent an avoidable accident within some outer bound of accident rates. If this is a clear data point outside those bounds (even with the uncertainty), one could make a case to severely limit or ban Uber's testing on public roads and require that they demonstrate sufficient maturity of testing procedures and data before being allowed back on. That, as opposed to waiting for another "data point" (death).


We absolutely should extrapolate something from those statistics.

Let's assume that the chance of a fatality in any two intervals of the same number of miles traveled is the same, and that the threshold for self-driving cars being "good enough" is the same death rate as human drivers.

If Uber is good enough, they should kill people at a rate of at most 1.25 per 100,000,000 miles. The waiting time until they first kill someone should then follow an exponential distribution: the probability that a death occurs within the first t miles is 1 - e^(-lambda t), where lambda is the rate of killing people, 1.25/100,000,000 per mile. For t = 4,000,000 miles that gives 1 - e^(-(1.25/100,000,000) x 4,000,000), which is about 0.049.

In other words, if Uber really were as safe as a human driver, there would be only about a 5% chance of seeing a fatality this early. That's weak grounds for confidence; at the very least, I think we can ask for better odds than that before they keep their license.
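
The same calculation as a runnable sketch (assuming a Poisson process at the human benchmark rate and roughly 4 million Uber autonomous miles):

    # Sketch: probability of at least one fatality in the first 4 million
    # miles, if fatalities followed a Poisson process at the human
    # benchmark rate of 1.25 per 100 million miles.
    import math

    rate_per_mile = 1.25 / 100_000_000
    miles = 4_000_000

    p_at_least_one = 1 - math.exp(-rate_per_mile * miles)
    print(f"P(>=1 fatality in {miles:,} miles) = {p_at_least_one:.3f}")  # ~0.049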


Also, probably 95 percent of the autonomous miles are driven under the easiest conditions (sunny days, between 9 and 5), because most are near the Arizona/California test centers.


1.25 per 100 million miles is almost certainly a bad benchmark since the majority of those miles are interstate miles. Fatality rate per mile of urban driving would be much better, although I'm not really sure whether I would expect that number to be higher or lower.

Edit: Actually, maybe I'm wrong in assuming (a) the majority of miles driven are interstate miles, or (b) that the majority of existing miles logged by self-driving cars have not been on the interstate. Would love to see some data if anyone has it, although I suspect Google, Uber, et al. are reluctant to share data at this point.


If the accident had happened with the autonomous vehicle of ANY company, we would still be talking about this and estimating the number of deaths per 100 million miles.

Therefore, I think it would be more fair to consider all miles run by all autonomous vehicles all over the world in the denominator.

It is for the same reason that we want to consider all miles driven everywhere, not just those in Arizona.


> As of December 2017 Uber had driven 2 million autonomous miles[1]. Let's be generous and double that, so 4 million.

How often did the human have to take control?


Note that the statistics we have to work with are relatively terrible. For example: Waymo's favorite statistic is "x miles driven", which is a terrible/useless statistic, because it treats all miles equally, and fails to note that the most complex driving is often short distances (intersections, merges, etc.) and doesn't account for the fact that most of those miles were repeatedly driving on a very small number of roads. But it looks good on marketing copy because it's a big number.

Additionally, our self-driving car statistics we tend to see today also tries to ignore the presence of test drivers and how frequently they intervene. As long as they can intervene, the safety record of self-driving cars is being inflated by the fact that there's a second decisionmaker in the vehicle.

EDIT: Oh, and human driving statistics are also bad: a lot of accidents don't even get reported, and when they do, it's often through different insurance companies. That's before we get into the fact that nobody centrally tracks "miles driven", which is why most statistics for human driving safety are more or less an educated guess.


I think it’s fine to attribute miles to autonomous vehicles which have drivers that can intervene... as long as that is how they are used outside of tests as well.

Just a guess, but I doubt having a test driver that can intervene will help safety statistics for autonomous vehicles much. I think we’ll find test drivers will usually be unable to notice and react fast enough when they are needed.


My biggest concern is that test drivers can and have done the 'hard parts' of a lot of test routes, making the overall miles driven statistic kind of useless as a representation of the driving ability of the car.

But yeah, I'd agree there's a lot of difficulties expecting a test driver to immediately take over in a sudden event like a bike entering the roadway.


I agree the statistics we have are pretty terrible. However, in this case, I think Waymo's statistic is actually quite useful. It's likely that Waymo's x miles driven statistic is largely driven by the fact that Waymo has tested their cars on a small number of roads, in fairly safe settings. But that paints Uber in an even worse light. Waymo is supposedly ahead, or at least on par with Uber in self driving technology, and they have chosen to limit their testing and driving to safer and a limited number of roads. Uber has not. That seems to underscore the fact that Uber has pushed beyond their tech's capabilities even though their competitors have determined the tech isn't there yet.

Also, if someone had posted a poll a day ago as to which company's self driving cars were likely to be the first to kill somebody, I think the vast majority of people would have predicted Uber. I don't think that's a coincidence.


Note that Waymo has already been the loudest about how they shouldn't be forced to have a steering wheel in their cars, and that they're also massively ramping up in Arizona because of the nonexistent regulations. In Arizona, Waymo won't be forced to disclose statistics on disengagements, for example, which California does require they hand over.


Waymo is already using autonomous vehicles in AZ without a safety driver behind the wheel. There are fully autonomous Waymo vehicles driving around in parts of AZ.


Which is why I am only a little bit surprised Uber beat Waymo to killing a pedestrian. Waymo is way too arrogant about its capabilities, moving way faster than is reasonable or safe, and they use misleading statistics to insinuate their system is more capable than it is.

Note that they already know their cars will need help driving still, which is why they've assembled a call center for remote drivers to take over their cars in Arizona. Of course, those remote drivers likely can't intervene at nearly the speed of an onboard safety driver.


We don't really have any statistics on autonomous car deaths yet. One fatality is only one data point; that's not _nearly_ enough information to come to any solid conclusion on the overall safety of the technology. (Not to mention the fact that a failure of any one particular implementation of self-driving car tech doesn't necessarily mean the other implementations are similarly unsafe.)


You know how sometimes you get a gut feeling or your awareness picks up somehow?

You experience it sometimes where you just happen to glance in the right direction, as a driver, and avoid horrible things from happening. Or hesitate to go through an intersection when a light turns green and some person is running a red. You and your body somehow knew but it can't be explained.

Computers don't have that, whatever it is.


The thing you're describing is coincidence/confirmation bias, not a real phenomenon.


I completely agree with you, but also have an anecdote that argues for the parent's view.

I was driving a long roadtrip from Texas to California, split into a couple segments over several days. At one point, my adrenaline and heart rate suddenly spiked. I felt freaked out but could not see a reason for it. I checked all my mirrors, traffic was busy but seemed to be moving along normally. A few moments later my vehicle was rocked by a semi-truck blowing past, traveling much faster than surrounding traffic, and missing my vehicle by what seemed like an inch.

The roadway was curved slightly, so I think the semi was in a blind spot when I was actively searching for the problem.

It's interesting that a subconscious process could alert me to a problem, in this case it didn't help me resolve it, but at least I was alert and looking. It had never happened before so there was a bit of confusion as well (why am I suddenly freaking out?) - but now I know how to pay attention if that feeling happens again.


Except that, in your example, an autonomous vehicle would already have tracked the truck (they can track nearly a mile in 360 degrees) and would have no need for that panic response. The OP is kinda pointless.


Maybe you haven't had it happen, then, to know what I'm talking about. Never had a gut instinct or a bad feeling that came true?

Sure, maybe it happens and there is nothing really going on, but my point is there is something about our subconscious that cannot be implemented with computers.


Part of what he's describing is coincidence/confirmation bias, but another part of it is probably some sort of cognition that happens at a level of the mind the driver is not conscious of.


The statistics may prove meaningless; what matters is the emotional reaction of our legislators. Take the recent legislation prompted by a single dog killed by United Airlines (my condolences to the family members/owners of that dog) vs. deaths via [insert any other under-regulated thing that kills many people].


No, the time to overreact is now, before millions of these get on the road.

Actually, the time to "overreact" was even earlier, but most self-driving car companies ignored any criticism. So now you have stupid car companies cutting corners and killing people so they can be "first" to market or whatever.


What if you already thought from the beginning that cars, including human-driven ones, and their infrastructure, were always a huge boondoggle (the grossest misallocation of resources in human history, as Jim Kunstler calls it), and an instrument that posits itself as the solution to a problem it caused (not unlike an addictive drug), and that meanwhile serves chiefly to centralize wealth and rend the social fabric?

Please petulantly downvote all minority opinions!


According to the latest info, it doesn't look like the system was at fault at all. It seems the woman, who was likely homeless, abruptly stepped into traffic. Every accident is of course terrible, but some may be unavoidable. https://www.sfchronicle.com/business/article/Exclusive-Tempe...


I know this is a slightly insensitive question to ask right now, but assuming that what happened was that the car tried to turn right and the cyclist came from behind and crossed into its path[0]: Who would actually have been in the wrong?

I cycle along paths like this from time to time myself, and I always assume that I should let a turning car pass in front of me. As soon as I see the turn signal I will either fall back or even pass into the car lane to overtake on the other side.

Of course that's in part because I know I would lose the fight anyway, but also because I think I am not actually supposed to be "overtaking" them on the right side, so they have the right of way before me.

What do the rules say in this case?

[0] This is based on this link posted elsewhere: https://www.google.com/maps/@33.4370667,-111.9430321,3a,75y,...


Depending on the state law, the right-turning car probably is supposed to yield to a cyclist in the bike lane going straight. But of course, every experienced cyclist knows that what is right is not what is safe; you should aim to travel just behind the car's rear bumper to give yourself time to stop if they swerve right.

This is the insanity of (most) bike lanes, that you have a lane to the right of a right-turn lane that can go straight. And that's why I don't use most bike lanes. They're trouble.


That move is against the law, at least in San Francisco. The vehicle should enter the bike lane to turn right.


These things tend to be based on the factual evidence of the case, as well as local and state laws. Talk to your lawyer.


Not to dismiss the tragedy of this incident, but it should be expected that self-driving cars kill pedestrians -- just at a rate lower than what's expected from human drivers.

Perhaps there's a better metric to look at, but I'd like to see number of deaths caused per miles driven.

If Uber's self-driving cars are killing more pedestrians than a human driver would, we have a huge problem, but I'd be willing to bet they're at least an order of magnitude safer in this respect.


You would lose that bet. As another comment pointed out[1]:

> The NHTSA reports a fatality rate of 1.25 deaths per 100 million miles[2], twenty five times the [4 million miles] Uber has driven.

So they should have driven around 75 million more miles before getting their first fatality, in order to remain even with humans. Not to mention they've been driving on the clearest/sunniest roads in only a few cities.

Of course a sample size of one is not enough data, but I'd say we should err on the side of "we have a huge problem".

There should have been so many safety precautions in place that nobody should have died from this yet.

[1] https://news.ycombinator.com/item?id=16620736


It'll likely never get passed, but we truly need an "acceptable death" metric written into law to protect the companies (and apparently people too, since some in these comments want the developers held responsible) from liability for deaths. In my opinion, a law should be passed allowing, per mile driven, 5-10% of the current deaths per mile, with that allowance decreasing after 10 or so years to something low (maybe 1%, maybe .01%; we'd have to wait and see).

People will always make mistakes; the benefit with self-driving cars, however, is that each mistake should only be made once. Whatever bug caused the car to be at fault can be patched and will never occur in that exact circumstance again. Meanwhile, with humans, the same mistake can occur over and over again.

With how much money people can win in lawsuits nowadays, all it would take is a handful of cases to totally destroy a manufacturer of self-driving cars.


In general, I like having a diverse set of options available in terms of software implementations (eg: multiple C++ compilers to choose from). It does worry me a little bit, however, that so many companies are trying to implement their own self-driving cars from scratch. If you just look at how buggy new OS releases tend to be, how many blatantly obvious bugs you run into every month, it's clear that many software development teams don't properly test their stuff. Some companies, like Uber here, will act irresponsibly and fuck up, resulting in the loss of human lives.

I would guess it's inevitable that in the medium/long-term, there will only be one or two companies developing and selling self-driving tech to every car manufacturer. Small players won't have enough data, everyone will be too afraid of the risk, the amount of government regulations surrounding self-driving cars will increase. More regulations will protect people but it will also mean it's much harder to start gathering the data you need and testing your product.


That sounds like the market correcting itself, while you want to protect the likes of Uber from market forces, with new laws. Putting that aside though, good luck trying to get re-elected after passing a law like that. Few people are technocratic and emotionally detached enough to agree with you, from the looks of it, not even on this site.


How on earth is a lawsuit a market force?

Did you even read the first 5 words?

>It'll likely never get passed

Can you stop trolling my posts? You aren't adding much of anything.


There is no data available at the moment to support the conclusion that self-driving cars cause fewer fatalities, and rightly so, because they are still at a beta stage of testing. You are only guessing and hoping.


There is certainly at least SOME data. At the very least, let's find out how many miles Uber's self-driving program has driven. Then, we can, at least, compare deaths per miles driven to a human driver.


There is, and the data doesn't look good.

There's typically 1 death per 100 million miles driven.

Uber's only driven a couple million miles.


Human drivers kill humans at a rate of 1 per 100 million miles. Collisions occur at a rate of like 1 per 200k miles.

AI drivers have killed 1 human having driven a mere ~5 million miles (not even counting Tesla autopilot). Disengagements occur at a rate of 1 per 5k miles.

The evidence strongly, strongly points to self-driving cars being more dangerous than humans today.
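
Normalizing those rough figures to a common per-100-million-mile basis (a sketch; the inputs are the round numbers above, not official statistics, and a disengagement is of course not the same kind of event as a collision):

    # Sketch: put the rough figures above on a per-100M-mile basis.
    # Inputs are the round numbers from this comment, not official stats.
    PER = 100_000_000

    human_fatalities  = PER / 100_000_000  # ~1 per 100M miles
    human_collisions  = PER / 200_000      # ~500 per 100M miles
    av_fatalities     = PER / 5_000_000    # ~20 per 100M miles
    av_disengagements = PER / 5_000        # ~20,000 per 100M miles

    print(f"fatalities per 100M miles: human {human_fatalities:.0f} vs AV {av_fatalities:.0f}")
    print(f"human collisions {human_collisions:.0f} vs AV disengagements {av_disengagements:.0f} per 100M miles")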


Fewer people will die (probably). The problem is that different people will die, and you can't celebrate with the families of the people who were slated to die and didn't.


And different people will "kill". It won't be the drunk drivers, or the road ragers, or the people texting on phones, it will be anyone unfortunate enough to be in the car at the time. A businessman. A politician. A cop. A teacher.

And to say that "it's not their fault" and "it could have happened to anyone" will be cold comfort to those who just watched their car mow down an innocent, and there was nothing they could do to stop it.


That’s an excellent point, one I hadn’t considered. Of course any number of accidents happen today to people whose only mistake is not being prescient, or at least not driving sufficiently paranoid to avoid something that arises unexpectedly.


Whilst I imagine it is horrible to be in a car that runs someone over, I don't think the driver deserves the brunt of the sympathy in that situation.


Recognizing that being a witness to a violent death, especially if its your vehicle (despite not being in control), has a significant impact on people takes nothing away from the victim's family.

Sympathy is not a zero sum game.


Not the lion's share, but it could happen to anyone, and I have to imagine the psychological impact never truly goes away.


Certainly this occurs today. "Tomorrow" it simply becomes the norm.


It's not that simple. Who is going to take responsibility for such incidents? The company that owns the car? The person behind the wheel (if any)? Or the engineer that designed the algorithm?


I would think it will be pretty standard product liability. With some assumptions (proper maintenance, software updates, etc., used in accordance with directions), it's hard to see liability for deaths and injuries not resting with the manufacturer. What's unusual for a consumer product is that, with the partial exception of pharmaceuticals, there aren't a lot of examples of consumer products that routinely kill people even when properly used.


Cigarettes?


Well, yes, cigarettes and alcohol (the latter in excess). Or even sugary drinks in excess. But there was, of course, a huge settlement in the case of cigarettes and the dangers are fairly well understood. There also would seem to be a qualitative difference between links to health problems/mortality over the course of years and failures that lead to someone's immediate death.


I'm pretty sure you don't want to compare right now. Self-driving cars are just too new.

I think the rate of overall vehicle crash deaths in the U.S. is less than 2 per 100 Million vehicle miles. Pedestrian traffic is a fraction of that.


Self driving cars are one of the things where we absolutely cannot afford to do "move fast and break things".

If there was any whiff of shortcutting, people need to go to jail and Uber should be bankrupted by penalties and lawsuits.


The last time I heard this, the first thing that came to my mind was "I sure hope Facebook isn't going into self-driving cars."

And then I learned Facebook is going into self-driving cars.

Sigh.


Devil's Advocate argument here, but have you read books on our space race in the 60s? Should NASA have been bankrupted by penalties and lawsuits when Apollo 1 had its accident?


If they were dropping rockets on people standing on street corners. Yes, 100%. Your example is different because it was deaths of people involved in the projects who knew there were risks and agreed to them.


My understanding, from talking to someone who works for the NTSB, is that the software controlling autonomous vehicles is currently completely unregulated, and that companies like Uber, Tesla, etc. have refused to provide access to their software for external review.

Recently I've seen a lot of comments defending autonomous vehicles from a statistical standpoint, but even if these cars have the potential to be safer on average, it scares me to think that the software driving a car is less regulated than its airbags or seatbelts. Especially considering the auto industry's tendency to ignore safety issues absent external pressure [0].

Is my information outdated? Have there been efforts to review or regulate autonomous driving software in the last year?

[0] https://en.wikipedia.org/wiki/Unsafe_at_Any_Speed


Unless this was a situation that was impossible to anticipate and avoid, this looks bad. It would mean that both the autonomous driving system and the safety driver failed.

Not hitting pedestrians should be close to the top priority, so while failures of the software are certainly expected during this kind of test, this is one of the worst kinds of failures and should have received a lot of attention before these vehicles were ever put on a public road.

The safety driver failing makes me wonder how well qualified they are, or whether there is an inherent problem with staying alert for long periods without actually driving the car.


The video says bicyclist and shows a bent bicycle. So pedestrian or bicyclist?


I've heard that police reports often refer to all non-drivers as "pedestrians", even if they were riding a bike (which I learned after a neighbor was killed while riding a bike). I don't know if this is legalese or police jargon, but either way you would hope a journalist who covers these things could translate.


Unfortunately, many police forces don't train the police on how to handle bicycles. For example, in many places riding on the sidewalk is illegal, yet police will suggest a cyclist get off the road and ride on the sidewalk.

From studying many bicycle-related police reports, I was dismayed at how often I read the phrase "a pedestrian riding a bicycle."


Riding on the sidewalk is often safer for everyone, though. And in some places I've lived, it's only illegal in specific marked areas of high foot traffic. The police maybe shouldn't be giving advice that is technically illegal (if it was), but it may have been pragmatic advice, at least.

Personally, as a biker, I get slightly annoyed by pedestrians who seem to think I have little control or awareness. I ride on the road where practical, and if on the sidewalk, I go much more slowly and sometimes just get off and walk if the area calls for it. Generally, bikers are much more aware of their surroundings than drivers (both out of self-preservation and from an unobstructed, elevated view).

Of course, the best solution is fully separated paths for walkers, bikers, and cars. And, in this instance, it looks like there is ample room for building that, with no street parking to complicate things (and the sidewalk also looks wide and sparsely used).


The article also mentions the pedestrian as "a woman walking", so it's not clear whether they are referring to a cyclist as a pedestrian, or whether they are getting different information and actually think it was a pedestrian. Very confusing


The video was released. Turns out it was a woman walking her bike.

Sad event, but it's likely a human driver would have killed her as well. She was on the road at night without lights, jaywalking without attention to the oncoming traffic.

One could argue the car was driving faster than its headlight range safely allowed. If so, humans are going to be quite frustrated with how slowly autonomous cars drive at night.


A police officer in Nashville shouted at my wife for riding her bike in the road in accordance with Nashville law. Basically, he didn't know the law he was enforcing.


When I lived in California the press reported pedestrian and bicyclist deaths under the same "pedestrian" moniker.


That's dumb. The term means someone who is walking. "Per pedes" means "by foot".


It would explain the "outside a crosswalk" remark better if the person were riding the bike. This is one of those cases where video evidence should be subpoenaed, though. It's far too common for the police to accept the statement from the only party still alive, and come up with incorrect conclusions about what happened on the street.


Seems very unlikely to me that this case will be brushed off without police looking at video.


Perhaps they were walking their bicycle across the street.


What happens when, in the future, a self-driving car from a ride service is driving empty back to a parking lot and hits someone? Does it call an ambulance or the police?


... this would make for a creepy film scene - someone shouting for help and the car just watching.


several conventional luxury cars already have this feature - they detect crashes and call emergency services.


While incredibly sad, this incident is not terribly surprising because the technology is nowhere near safe enough.

Unfortunately, this will inevitably set the field back by at least 10 years with stricter regulations. We've seen the same with pharma and aviation, where innovation was slowed drastically and is only realistic for big companies. This is especially true here because if self-driving vehicles get banned from public roads, companies like Waymo will be forced to develop bigger private "test tracks" that approximate cities to have any hope of building deployable technology, and they will face the same fundamental limitation – the data will come from an approximation/model and not the actual location with real-life scenarios.

Although there's no doubt that regulations are crucial to prevent such mishaps, I do hope lawmakers can find the right equilibrium on the tradeoff between safety and innovation.


Statistically speaking, I am curious to see the data on two numbers:
- number of miles driven per fatal accident for self-driving cars
- number of miles driven per fatal accident for human drivers

It is important to know whether our machines are now doing worse or better than their human counterparts; this means a lot for our self-driving initiatives.


1.15 per hundred million vehicle miles traveled by humans [1]. Uber only reached a million miles in September, with Waymo a little ahead at four million [2].

Note that this is far too little data from which to draw any conclusions, since the sample size for fatalities involving self driving cars is one and the number of miles driven is many orders of magnitude less.

[1] http://www.iihs.org/iihs/topics/t/general-statistics/fatalit...

[2] https://www.theverge.com/2017/11/28/16709104/waymo-self-driv...


The numbers that I have seen indicate that humans have a fatal accident roughly once per hundred million miles, while we have one fatal accident for a self-driving car, with somewhere around ten million miles driven across all self-driving cars.

I've heard Uber is around 3 million self-driving miles.

So Uber would be 30x worse than humans.
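
As a back-of-the-envelope check (a sketch only, using the rough figures quoted in this thread; with a single fatality the sample is far too small to conclude much):

    # Rough comparison using the approximate figures quoted above (Python).
    human_fatal_rate = 1 / 100_000_000   # ~1 fatal accident per 100M human-driven miles
    uber_miles = 3_000_000               # approximate Uber self-driven miles
    uber_fatalities = 1

    uber_fatal_rate = uber_fatalities / uber_miles
    print(f"human rate: {human_fatal_rate:.1e} fatal accidents per mile")
    print(f"Uber rate:  {uber_fatal_rate:.1e} fatal accidents per mile")
    print(f"ratio:      ~{uber_fatal_rate / human_fatal_rate:.0f}x")  # ~33x with these inputs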


Good to know! But I am more curious about data covering every self-driving vehicle, not just Uber. It could be that Uber is doing worse; my question is whether we (as an industry) are still heading in the right direction with this "bold bet".


Interesting analysis, but the data seems a little sparse to draw this conclusion.


We also don't know how many fatal crashes were avoided due to human intervention (Uber has people inside the cars). It is almost certainly a whole lot worse than 30x at this stage.


This was inevitable. Some of the developers for self-driving vehicles are immensely careful, some are not. And the regulatory oversight is currently not sufficient to allow the one to operate but not the other.


The reason self-driving car companies have flocked to Arizona is that Arizona chose to pretty much completely deregulate them: They don't even have to report statistics to the state like they do in California.

It's unfortunate, but unsurprising, that the first pedestrian killed by an experimental self-driving car allowed on public roads was in Arizona.


What will the consequences of this person’s killing be? Will someone lose out on a promotion, or miss their performance bonus?

We need to discuss how the developers of self-driving cars will be held accountable for the crimes they commit. There is no reason the person who programs or the person who makes money from a self-driving car should be held less accountable for a crime that, if committed directly by a person, would almost certainly result in jail time. You can’t replace yourself with a computer program and then choose to take only the benefits and not the responsibilities.


We already have plenty of case law and policy for cases where people are killed by mechanical equipment operated by businesses, and the type of penalties and compensation are appropriately different if the cause was negligence, malice, or impossible-to-eliminate fluke events. (Generally in the latter case, compensation is due but there are no criminal charges.) Obviously there will be policy adjustments and clarifications for the case of self-driving cars, but I don't think there's reason to think we can't apply normal and existing legal principles here.


There is a massive difference in terms of scale and choice (FWIW). Industrial automation is most likely to kill you if you work in the plant. The person who died here was a random pedestrian. If these cars were restricted to special areas the analogy might make more sense, but I don’t expect to be dealing with a self-driving car or an industrial robot when I step outside my front door.

Moreover it is not clear to me that not holding the companies that create industrial robots that kill people criminally responsible is what most people would consider just. Again, I think it’s just that there is a massive difference in the scale of exposure; there were not enough interested people to have a debate.


Cars are already machines built by companies, and they sometimes malfunction and kill people (both the drivers of the vehicles and people around them). This is just a new way in which they can malfunction; I don't think it's as dramatically different as you're saying.


You’re right that there are already ways in which non-self-driving cars can malfunction. But previously we held human drivers responsible for certain kinds of accidents. For these same kinds of accidents we now propose holding no one responsible. That seems to be the dramatic change to me.

We have held humans responsible because assuming a correctly functioning car they are performing the most complex and risky task, and are most able to cause problems. Likewise self-driving car software performs a complex and risky task in which failure can have serious consequences.


There's already such a thing as a no-fault collision. There's also already such a thing as a collision where the manufacturer is at fault. I feel like this stuff is all covered in driver's ed.


And there is such a thing as an at-fault collision. Is what you are saying supposed to be a contradiction? Also, I have a license and drive regularly; I don’t see how your strange assertion I must not is productive.


Right now we have something of a two-tier system of liability which would for the most part work fine with automated vehicles. The primary liability falls on the owner/operator, who usually carries insurance. The owner/operator has some level of self-interest in maintenance of the vehicle - otherwise an automated vehicle might have a perfect design, but the maintainer never changes the brake pads or operates with the tires worn, etc. If the insurance company finds reason to doubt the design integrity of some vehicle model, that liability may be passed on to the manufacturer in a separate case. An individual owner is actually in a poor position to know systematically whether there is reason to bring suit over a subtle design or manufacturing defect, but an auto insurance company has both the data and the resources to see and react to defects.


> If these cars were restricted to special areas the analogy might make more sense, but I don’t expect to be dealing with a self-driving car or an industrial robot when I step outside my front door.

As a pedestrian you already run a significant risk of being killed by a car. To the extent that we hold autonomous car makers responsible for these deaths (and I'm not saying we shouldn't), we should hold non-autonomous car makers responsible for the deaths their vehicles cause as well.


We do hold non-self-driving car makers responsible for bad manufacturing. But in accidents not due to manufacturing we primarily hold the human drivers responsible. I agree with you overall, but the problem is that people seem overeager to hold no one responsible at all, sometimes based solely on a blind faith that self-driving cars will be safer than humans soon, and that the deaths along the way are just the price we will have to pay—as if there is no other option between no self driving cars at all, and the “move fast and break things” attitude that here resulted in a person’s death.


> and the “move fast and break things” attitude that here resulted in a person’s death

Slow your roll. Nobody knows why this person died yet.


The thing to remember is that limiting self-driving cars is not safe either. Human-driven cars kill thousands of people every day; a policy that saved this person's life but set back self-driving car development by even (say) a month might well do more harm than good.


lmm, the data does not support your claim; see gpm's comment above.


Airplane (and car, for that matter) malfunctions can already kill travelers. Why not apply existing principles from those types of cases?


Because those vehicles have licensed human operators. The malfunctions may be blamed on the manufacturer, but the vehicles themselves are also licensed and regulated. The cars have to pass certain crash test standards, for example.

In this case, the operator was an AI that was negligent and it was unlicensed/unregulated. That's a new scenario. In the human case a person might go to jail for negligent vehicular manslaughter. What does 2 years of jail time look like to an AI? What does a suspended license look like to an unlicensed entity?


I’m specifically talking about the case where the operator is not at fault.


For choice: manufacturer failures happen with normal cars, and you risk that every time you step outside your door. Likewise with building failures, construction accidents, etc.

For scale: the risk of death from a self driving car will probably be less than the current risk of death from normal cars, and will definitely be less than the risks incurred in the 20th century from cars, buildings, etc.

Self-driving cars are definitely a new and large legal development, but there's no reason to think existing legal principles can't handle them.


No, this is not equivalent to the risk of existing manufacturing defects in cars. Car bodies undergo safety tests by the government; the software for these self-driving cars is being tested on public streets. Same with buildings, which must be inspected.

As the GP states, the entire reason Uber is testing in Arizona is because their state government completely got rid of reporting regulations which were present in CA; the status quo is decidedly not the same as it is for established technologies.

As for scale, look at the other comments where people analyze the risk posed by self driving cars. Your assumption that the risk of death from self-driving cars is less is not backed up by the evidence.

It’s fine to say that self-driving cars might eventually be better drivers than humans, just like robots might eventually be better at conversing than humans.

There is no reason self-driving cars can’t be tested in private. Uber can hire pedestrians to interact with them—I don’t volunteer to be their test subject by deciding to take a walk.


First you started by claiming the difference was due to scale and choice. You're now retreating to a third distinction: the difference between established technology and experimental technology. Well, all established technology was experimental technology at one point, and it was not uniformly regulated. We could play this game all day.

Self-driving cars are a new and important industrial development that will require adjustments to policy. They don't require revolutionary new legal principles.


This is crazy. The developers likely have no say in where and when the cars go out on public roads. That's obviously a decision for someone higher up in the company.

The executives should be held accountable, not the developers.


I disagree, you're just passing the buck. Accountability needs to be had at all levels. If an engineer writes a bug into code like this (deliberate or not) and such a bug results in somebody's death, the engineer should be held accountable just as much as the person who approved its release. The executive could just as easily say "my engineers promised me it was fully tested", etc. Engineers could say "yep it was, but that was an edge case we missed" or something like that. In any case, there needs to be shared accountability. Maybe execs take the brunt, but engineers should not be allowed to write code that kills people (inadvertently or otherwise) and face zero consequences.


What software developer would ever sign on to a project where they could be held criminally liable for a single bug?

Do you want software development to turn into healthcare, where every developer needs millions of dollars of malpractice insurance? Because shit like this will turn it into a healthcare like system real quick.


Criminal liability is a different situation as there are very few industries with specific criminal liabilities (finance maybe).

But there are many industries where civil liabilities are required. In fact, any independent software consultant is civilly liable for their work, but it’s not specific to software.

IEEE has a section in their member toolkit that goes into why professional liability insurance is needed, https://m.ieee.org/membership_services/membership/discounts/...

The costs aren’t that high or at least they weren’t 15 years ago when I purchased it for less than $1k/year for $1M in coverage. Most people need this even if they think they are safe. If you’re the one who wrote the deployment script that erased $1M in data, it won’t be entirely mitigated that the script made it through qa.

Also interesting: the engineers who wrote the Uber software are already exposed to criminal-negligence liability, like pretty much everyone else, but you would have to prove culpability. I can’t find any examples of software engineers being convicted, so it’s hard to tell who would go to jail: developer, QA, or executive.

More info on criminal/civil negligence- https://www.theblanchlawfirm.com/?practice-areas=criminal-ne...


Nobody in their right mind would work with such liability without insurance, which is all well and good for civil liability, but insurance won't help if you're going to jail.


I think we may be arguing different things.

Almost all employees have the possibility of criminal negligence based on their work. For programmers, this could mean that if you fuck up the code for a pacemaker and someone dies, you could go to jail. That’s a big risk and I can’t find any programmer who has been found culpable for someone’s death. This is the current law in the US.

If Uber was negligent in its code, then the programmers could go to jail. They have programmers, and those programmers work under and accept this extremely low risk.

Now maybe you’re arguing that some special law should or should not exist for Uber drivers.


This happens all the time in aerospace. You need to sign off the software personally and you need to be an accredited engineer to be allowed to do that.


If the only options you’re presenting are “move fast and break things” where those things are human lives, or introducing burdensome bureaucracy, I’ll take the bureaucracy. Time and again society has chosen the latter option, and it will again. Unaccountability is worse than regulation, and history has shown that repeatedly.


This is quite the strawman is it not? I said nothing about "move fast and break things."


> What software developer would ever sign on to a project where they could be held criminally liable for a single bug? Do you want software development to turn into healthcare, where every developer needs millions of dollars of malpractice insurance? Because shit like this will turn it into a healthcare like system real quick.

How else to interpret that? When a single bug can cause loss of life, and given that this is a thread about Uber, it’s hard to draw other conclusions. By all means, though, offer another perspective on why industries with significant numbers of lives on the line can’t be regulated. While you’re doing that, I’d point to the aerospace sector, which seems capable of both innovation and regulation.


There's a difference between holding someone criminally responsible for a bug in code that they wrote, and some sort of regulation. They are not the same.

For example: https://en.wikipedia.org/wiki/Boeing_737_rudder_issues


Bad example for two reasons. First:

Although the NTSB investigated the accident, it was unable to conclusively identify the cause of the crash. The rudder PCU from Flight 585 was severely damaged, which prevented operational testing of the PCU.[3]:47 A review of the flight crew's history determined that Flight 585's captain strictly adhered to operating procedures and had a conservative approach to flying.[3]:47 A first officer who had previously flown with Flight 585's captain reported that the captain had indicated to him while landing in turbulent weather that the captain had no problem with declaring a go-around if the landing appeared unsafe.[3]:48 The first officer was considered to be "very competent" by the captain on previous trips they had flown together.[3]:48 The weather data available to the NTSB indicated that Flight 585 might have encountered a horizontal axis wind vortex that could have caused the aircraft to roll over, but this could not be shown conclusively to have happened or to have caused the rollover.[3]:48–49

On December 8, 1992, the NTSB published a report which identified what the NTSB believed at the time to be the two most likely causes of the accident. The first possibility was that the airplane's directional control system had malfunctioned and caused the rudder to move in a manner which caused the accident. The second possibility was a weather disturbance that caused a sudden rudder movement or loss of control. The Board determined that it lacked sufficient evidence to conclude either theory as the probable cause of the accident.[2]:ix[3]:49 This was only the fourth time in the NTSB's history that it had closed an investigation and published a final aircraft accident report where the probable cause was undetermined.[4]

Second:

In 2004, following an independent investigation of the recovered PCU/dual-servo unit, a Los Angeles jury, which was not allowed to hear or consider the NTSB's conclusions about the accident, ruled that the 737's rudder was the cause of the crash, and ordered Parker Hannifin, a rudder component manufacturer, to pay US$44 million to the plaintiff families.[16] Parker Hannifin subsequently appealed the verdict, which resulted in an out-of-court settlement for an undisclosed amount.


You interpret it as written, which is that holding developers routinely criminally liable for bugs is going to have very negative effects. One of them is that the only developers you'll get are precisely those too unwise to realize what an incredibly stupid deal that is, no matter what the pay rate is. I don't think I'd like to see all my critical software written by such "unwise developers".

I have no problem "piercing the veil" for egregious issues. I'd have no problem holding a developer liable for failing to secure a project but just continuing on rather than quit. But "Let's just hold all the engineers criminally liable all the time!" is a bad idea and it is not already done for a reason.


It’s not done because software development is an unregulated shitshow full of wildly unethical companies scrambling for the bottom. It’s not unlike early aerospace, or early medicine, or any frontier which develops rapidly before legal frameworks inevitably close in.


It's also not done because it is mathematically impossible to certify software as 'bug-free' in the general case.

Software isn't like civil engineering where you can mathematically prove that a design is sound.


This is not true at all. First of all, there's no such thing as being able to mathematically prove a design is sound in any engineering discipline, software or non-software. After all, it is infeasible if not impossible to encapsulate all the details of the implementation of _any_ system in mathematics or any other system of reasoning (down to every last atom, if you stretch your imagination).

All we have in engineering (non-software) is something like safety factors and confidence, and this is done with (usually) rigorous mathematical models as well as loads and loads of testing to fill in the gaps of mathematics (think unknown constant/parameters, assumptions, etc).

None of this is impossible to do for software. There are systems that enable entry-level verification (something like TLA+) up through much more involved reasoning (something like Coq). These let system designers gain confidence in whether the system will work and understand the scenarios under which it will fail. Contrast this with the existing software landscape, which is mostly, at least from my perspective, "let me write some stuff until it does approximately what I want." Even at the top of the ladder, I feel the tests conducted are ad hoc at best, with none of the rigour you associate with traditional engineering fields.
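
As a toy illustration of the entry-level end of that spectrum (plain Python, not TLA+ or Coq, and the little car/crosswalk model is entirely invented for the example): exhaustively enumerate every reachable state of a small model and check a safety invariant in each one, which is the basic idea behind model checking.

    # Toy model checker: breadth-first search over all reachable states of a
    # made-up car/crosswalk model, asserting a safety invariant in each state.
    from collections import deque

    START = ("far", False)  # (car position, pedestrian in crosswalk?)

    def successors(state):
        car, ped = state
        succs = set()
        # Controller rule under test: only enter the crosswalk when it is empty.
        if car == "far":
            succs.add(("near", ped))
        elif car == "near":
            succs.add(("in_crosswalk", ped) if not ped else ("near", ped))
        elif car == "in_crosswalk":
            succs.add(("past", ped))
        # Environment assumption: the pedestrian steps in only while the car is
        # still far away, and may finish crossing at any time.
        if not ped and car == "far":
            succs.add((car, True))
        if ped:
            succs.add((car, False))
        return succs

    def invariant(state):
        car, ped = state
        # Safety property: car and pedestrian never share the crosswalk.
        return not (car == "in_crosswalk" and ped)

    seen, frontier = {START}, deque([START])
    while frontier:
        s = frontier.popleft()
        assert invariant(s), f"safety violated in {s}"
        for t in successors(s) - seen:
            seen.add(t)
            frontier.append(t)
    print(f"invariant holds in all {len(seen)} reachable states")

Real tools do this exploration (and far more) for you; the point is only that "does the invariant hold in every reachable state?" is a checkable question, not a hope.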


Healthcare costs are not unreasonable in much of the developed and developing world. Most countries have better outcomes and lower costs than here in the US. As another commenter says, healthcare seems to be doing fine; you seem to be assuming the US is the norm when it isn’t.


> Healthcare costs are not unreasonable in much of the developed and developing world. Most countries have better outcomes and lower costs than here in the US. As another commenter says, healthcare seems to be doing fine; you seem to be assuming the US is the norm when it isn’t.

I'm going to cauterize the off-topic debate about the US healthcare system by pointing out that OP was talking about the expense to doctors of malpractice insurance, not about costs to the patients or medical outcomes.

Malpractice liability varies widely by country, but it's a non-trivial expense for doctors everywhere, and significantly higher in states with strong tort liability for doctors.

It's hard to imagine a world with criminal liability (or tort liability) for software engineers that doesn't ultimately end up with a system of insurance for engineers, roughly analogous to the medical malpractice insurance system for physicians.


I am afraid that you are introducing the off-topic debate. The end goal of healthcare is better outcomes for lower prices. Likewise, the end goal of engineering should be better technology for lower costs.

That healthcare in other countries is able to achieve this in spite of the medical malpractice insurance system points to the fact that such a system is not certain to have the deleterious effects you confidently assume.

Whether it is a burden for engineers is another question. But the article and the discussion aren’t about the inconveniences faced by the engineers who programmed this system.


>system of insurance for engineers

Which, as someone else noted, exists and is probably a good idea if you're an independent consultant or possibly a professional (i.e. licensed) engineer who signs off on drawings or other documents for clients or regulators.


I'm sure plenty of people would but that isn't the point. If you're writing code that potentially costs people their lives, you need to be able to be held accountable otherwise it will lead to negligence. This isn't a new problem... maybe for the software space, but not for industry as a whole.


Humans don't suddenly become perfect actors just because incentives align. The stress of that risk and efforts taken to mitigate it seems like it would actually make the software worse.

It's up to the product (the collective of individuals that deliver the product) to address and mitigate the risk it creates, that's not solely on the shoulders of individual software contributors.

If A writes a generic computer vision algorithm and open-sources it, B integrates it into an "is this a bomb or not" product with a white paper outlining its failure rate in a specific situation, then C sells that product to D, who uses it in an entirely different situation, and E gets blown up... who gets sued? It definitely should be somebody; there should certainly be a liability and an incentive to avoid such a liability, but it probably lies somewhere in C-D space, not A-B space.


I partially agree. “There is no reason the person who programs or the person who makes money from a self-driving car...”

The person who profits the most should be held the most responsible. But the separation of roles between the executives and the developers is likely to mean that no one gets punished at all.


Why should the person who profits most be held the most responsible? Suppose the person in charge of safety clearance makes less money than the original developer. How does that shift responsibility to the developer, as opposed to the situation where the developer makes less money?


They do have say in who they work for and what they work on. It's not as if a developer capable of doing self-driving work isn't in high demand. Maybe they should stop working for unethical companies doing unethical things. It's not as if the executives could do this work themselves.


> You can’t replace yourself with a computer program and then choose to take only the benefits and not the responsibilities.

Look, I'm all for developers and (software companies in general) to be considering the ethical implications of the work they do, and the moral obligations that they take on as a result of it. However:

> We need to discuss how the developers of self-driving cars will be held accountable for the crimes they commit. There is no reason the person who programs or the person who makes money from a self-driving car should be held less accountable for a crime that, if committed directly by a person, would almost certainly result in jail time. You can’t replace yourself with a computer program and then choose to take only the benefits and not the responsibilities.

This is a bad mentality to take with postmortems for software failures in general, at least from the outset. You need to look at the underlying factors that contributed to the issue, not simply looking for a person to assign blame to. It's possible that negligence is the underlying cause, but not necessarily - and even if negligence is a cause, what were the other cultural factors that led to the negligence happening, without being caught somewhere else in the pipeline? It's tempting to look to assign blame, but if you do that, you'll actually miss out on the systemic improvements that would be necessary to prevent similar incidents in the future.

But moreover, this is a bad outlook to take here, because this wouldn't be criminal behavior if committed by a human. From the best we can tell, given the details available so far, it's an accident, and it's very rare for criminal charges to even be considered in accidents like these, unless it's a hit-and-run.


> this wouldn't be criminal behavior if committed by a human

I can assure you that a human who is driving carelessly would be held criminally liable. Why do you assume an accident that was severe enough to have resulted in a person's death—the car didn’t just scrape them because they ran across the street—is not due to reckless programming?


There are soooo many instances of negligent drivers killing cyclists with basically no follow-up from the police. Police all over the US seem to consider cyclists as second-class road users, and trust the driver when they say a cyclist "came out of nowhere". Since these sorts of collisions are more often fatal for the cyclist than the driver, there often isn't anyone to tell the other side of the story. There are rarely criminal charges, and even more rarely convictions (juries are mostly drivers, not cyclists).


> I can assure you that a human who is driving carelessly would be held criminal liable.

Do you see evidence that the car was "driving carelessly"? That's an honest question - from the reporting so far, it doesn't seem clear what the underlying cause was.

Secondly, this is demonstrably false: most pedestrian fatalities caused by vehicles do not result in criminal charges. If you don't believe me, look up the stats. Or talk to the countless bikers' advocacy groups that have been lodging this exact complaint for decades: drivers are not generally held criminally responsible unless there are aggravating circumstances (the driver was drunk, the accident was a hit-and-run, etc.).

> Why do you assume an accident that was severe enough to have resulted in a persons death—the car didn’t just scrape them because they ran across the street—is not due to a reckless programming?

When a pedestrian dies, just because they died, that doesn't mean the driver is automatically responsible. It could have been the pedestrian's fault, or it could have been the driver's fault. Or it could be both. Or it could even be neither (a true accident, with no assignment of blame).

The same thing holds here. You can't assume that this is the result of "reckless programming", and to be entirely blunt, by jumping to that conclusion on the basis of literally no evidence whatsoever (and misinterpreting existing case law on vehicular accidents in the process), you're actually undermining the success of any future efforts to prevent these sorts of accidents in the future, whether or not it ultimately turns out to be the fault of someone at Uber.


You have good points, thanks for discussing this. I think for me the fundamental problem is that with a human we can characterize reckless driving as driving that a normal, competent human would not do. But there is no “normal, competent” self-driving car, so by what standard do we determine the program’s behavior to be reckless as opposed to merely acceptable?

I accept your point that this accident might not have led to criminal charges if a human had been responsible. But I don’t waver on my argument that if a human driver would have been held criminally responsible for this accident, then we should hold the executives (or in extreme cases the programmers) of Uber responsible in exactly the same way, whether that be criminal or not.

Finally, with humans and pedestrian fatalities many cases involve drunk driving or sleepy driving. Self-driving cars can’t get drunk or sleepy; they can just have bad programming or bad hardware, both installed by their manufacturer.


If you hold developers responsible, you can kiss self driving cars goodbye.

What should be passed (though I can't see how) is an allowed fatality rate, at least in the early years: something like 5-10% of the current rate, decreasing to 1% after 20 years.

People will die from self-driving cars, and undoubtedly there will eventually be a case that is 100% the self-driving car's fault. The benefit of self-driving cars comes from the mistake being permanently fixed, while with human drivers it can be committed over and over again.

There needs to be some kind of protection for the companies (and obviously the developers; I've never heard someone argue they should be held responsible before) from lawsuits. Otherwise all it'll take is a small handful before companies just let it die.


> If you hold developers responsible, you can kiss self driving cars goodbye

Civil engineering and medical device manufacturing seems to be doing fine, despite having similar principles of engineers' liability.


> Civil engineering and medical device manufacturing seems to be doing fine, despite having similar principles of engineers' liability.

The idea that software engineers should be held responsible for something that (as far as we can tell so far) was an accident and not the result of negligence or malice is several orders of magnitude beyond the level of liability that civil engineers and medical device manufacturers have.


> that software engineers should be held responsible for something that (as far as we can tell so far) was an accident and not the result of negligence or malice

Nobody said that. The original comment said developers should be held "accountable for a crime that if committed directly by a person would almost certainly result in jail time" [1].

The standards from medical devices and/or civil engineering, with the associated licensing requirements and verification processes, make sense. Even in the case of a careless mistake or strategic oversight, individuals who could have known but nevertheless signed off should be identified, if not explicitly punished.

[1] https://news.ycombinator.com/user?id=jonathanyc


> > that software engineers should be held responsible for something that (as far as we can tell so far) was an accident and not the result of negligence or malice

> Nobody said that.

Well, they quite literally did, because the original comment in this thread was:

> We need to discuss how the developers of self-driving cars will be held accountable for the crimes they commit. There is no reason the person who programs or the person who makes money from a self-driving car should be held less accountable for a crime that, if committed directly by a person, would almost certainly result in jail time. You can’t replace yourself with a computer program and then choose to take only the benefits and not the responsibilities.

I guess you can quibble about the difference between "accountable" and "liable", but that's not a discussion that's particularly interesting to have here, especially given OP's other comments in this thread which make it quite clear that this is what they had in mind.


The quibble in this case would be the meaning of "developers." In the case of a medical device, the developer is considered to be the Manufacturer, not the specific software developers on the team. Considering how often teams change, etc., using the latter definition would be meaningless.


If it was an accident and not the result of negligence or malice, what is the crime for which the developer would be prosecuted?

If the developer was negligent or malicious in their duties, why not prosecute them?


An act doesn’t become okay because two people (the executive and developer) and a robot are now responsible instead of one. What is your justification for the sort of utilitarian calculation you’ve made here? Why do you assume self-driving cars will be safer without any evidence?

If we are going to be arguing from a utilitarian standpoint, suppose we hold the executives of self-driving car companies as responsible as if they were themselves drivers. Then if self-driving cars truly are safer as you optimistically claim, both fewer people will die from accidents involving them and fewer people will go to jail for those same accidents. Seems like a win to me.


[flagged]


Why is it at all quite obvious? How is arguing for being careful “such a stupid argument it’s really just not worth anyone’s time to entertain”?

Someone in this discussion has an insane amount of blind faith in technology which here literally killed a pedestrian, and it’s not the people who are arguing for just consequences.


Are you arguing that a machine does not have better reaction times than a human being? Are you arguing that a machine can fall asleep, drink and drive, panic in a high stress situation?

Aren't you the same person who called for holding the developer liable for writing software with a bug? Are you accusing the developer of promising something that is impossible (not hitting a pedestrian in a crosswalk?) or simply implementing it wrong?

It's worth pointing out that we have no idea yet who is at fault in this accident. It could easily be someone who simply walked out in front of traffic when they weren't paying attention.


"Are you saying X" is a pretty aggressive way to frame your argument.

The above poster seems pretty clear that it is NOT obvious that cars will necessarily drive safer than humans on average, in the same way it is NOT obvious that we will ever have General Artificial Intelligence.

These are very complicated problems, and the machines are currently (significantly) worse than human drivers, so I think it's fair to question the argument that "everything will work out eventually"


I think the idea that self driving cars "may not ever be" safer than human drivers is ludicrous, even fatuous. We set an abysmally low bar for safety.

I think that is why the very assertive "Are you saying..." is appropriate.


Maybe you're right, progress is inevitable.

But humans always overestimate the rate of progress, and think we will be living in some amazing futurescape in the next 10 years.


The answer to your first “question” of course is that it depends on the machine, how it’s built, programmed, and the context of operation. Machines can have much faster reflexes, or they can freeze.


Machines will have slower reaction times.

Theoretically it could be otherwise, perhaps, though the human brain has an extremely parallel pattern-matching engine honed by about half a billion years of evolution.

Realistically, the self-driving system will be made of layered distinct components that all add latency. This is how we build both hardware and software. An image is sensed, it gets compressed, it gets passed along the CAN bus, it gets queued up, it gets decompressed, it gets queued up again, object detection runs, the result of that gets queued up for the next stage... and before long you're lucky if you haven't burned a whole second of time.
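
As a purely illustrative latency budget (every number below is an invented assumption, not a measurement of any real system), just to show how the stages add up:

    # Hypothetical per-stage latencies for a layered perception pipeline (all invented).
    stages_ms = {
        "sensor exposure + readout": 33,   # one frame at ~30 fps
        "compression + bus transfer": 15,
        "decompression + queueing": 10,
        "object detection": 50,
        "tracking / prediction": 20,
        "planning": 30,
        "actuation (brake ramp-up)": 150,
    }

    total_ms = sum(stages_ms.values())
    speed_mps = 40 * 0.447  # 40 mph in metres per second
    print(f"end-to-end latency: ~{total_ms} ms")                                # ~308 ms here
    print(f"distance covered at 40 mph: ~{speed_mps * total_ms / 1000:.1f} m")  # ~5.5 m

Even with these fairly generous made-up numbers, the car travels several metres before the first braking force arrives, and any extra queueing pushes the total further toward that whole second.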

Machines can drive aggressively.

There was a university that had self-driving cars do parallel parking... by drifting. Driving along, the car would find a parking spot on the other side of the road. It would steer hard to that side, break traction, swing the rear of the vehicle around sideways through a 180-degree turn, and finally skid sideways into the spot. The car did this perfectly.

That kind of ability is something that I personally don't have. I would consider buying a self-driving car that could do this. If I'm paying, and that kind of driving is my preference, I expect to get it.


I really don't want you to get your wish. We have no need to invest in flashy self-driving car stuntmen, building a car that can get you from A to B safely and in a reasonable time frame is all that we should be aiming for.

That sort of drifting parallel park might work most or nearly all of the time, but if the road conditions are poor and the car loses handling then it will be a lot more risky.


The camera feed going straight to the neural network will not have a lot of latency. The neural net will not take very long to process the image and make a decision. Humans need at best half a second and at worst several seconds to recognize, process, and act. These systems are designed to respond fast. They do not have a second of latency.


> What should be passed (but I can't see how) is a percentage of allowed deaths, at least in the early years, and set it to something like 5-10% of the current rate, reducing downwards to 1% after 20 years.

With so few self-driving cars, that number should be zero. If you can't assure safety with a few cars with a human as backup, you should not be on the streets. And it's not the first dangerous accident involving an Uber self-driving car where Uber was at fault.


I'm guessing people are downvoting because of the implication that the software team should be held responsible?

Something does feel wrong about punishing them when the decision to put the car on the road in the first place was almost certainly not their own.

Though I agree Uber should be held accountable for it and it shouldn't be a token fine since the whole point of punishing an accident like this is to discourage them from occurring in the first place.

This sort of accident, with responsibility spread across a group of people, probably won't be handled gracefully by our legal system.


The developers have no control over the sensors, tires, weight of the car, testing budget, human backup operator, or a million other things that went into this happening. Hell, developers probably told whoever they could not to release this. Management at Uber and whoever approved this thing should be held accountable.


> The developers have no control over the sensors, tires, weight of car, testing budget, human backup operator, or a million other things that went into this happening

If one of those things caused the accident, the developer isn't to blame. Civil engineering has experience tracing liability from mistakes (and incentivizing prevention).


The realistic alternative isn't California-esque regulations, it's a country with poorer (more disposable) people. Arizona is a nice compromise, at least we can see what's happening.


You've got to be kidding or sarcastic about the poorer=disposable part, right?


I don't personally view poor people as disposable, but I'm not kidding about what I believe would happen.


It doesn't seem to happen in practice. The companies could test in 3rd world countries but none have as far as I'm aware.


Driverless car testing hasn't moved to another country because AZ volunteered to let some of its people get run over by driverless cars.

There's a reason why high pollution manufacturing moved to China. Because China was more willing to let their people die in exchange for jobs.


"ha ha only serious" it's an observation of how the world is (see: factories and recycling operations in Asia, mining in Africa), not how it ought to be.


Oh, you sweet, summer child.

There's a reason sweat shops and slave labor happen more in some places than in others.


Would you please stop posting uncivil and/or unsubstantive comments to Hacker News? We eventually ban accounts that do this, and have already asked you once.

https://news.ycombinator.com/newsguidelines.html


1500 comments before knowing the real reasons. It could be suicide. The dead woman had a history of substance abuse and had been arrested 6 times. http://www.dailymail.co.uk/news/article-5519433/Self-driving... http://fortune.com/2018/03/19/uber-self-driving-car-crash/


It needn't be suicide; from the second link, the victim may have stepped from a shadowy spot/out of view. A car cannot avoid what it can't see.


It seems like a bunch of what-ifs that normally come up with self-driving cars are about to get answered and precedents are about to be set.

I assume this case will also be one of the most well-recorded fatal car accidents in history, given the amount of sensors and equipment on board a self-driving car, along with eyewitness testimony from the operator on board.

Can't tell if Uber has just been incredibly unlucky as of late, or if enough of their employee base is incompetent to prevent them from having a quiet year with no large failures.


Hype cycles are beginning to cause loss of life. Self-driving cars are being allowed onto the roads far faster than other transportation innovations would have been. This is because of the enthusiasm around technology. We need to realize the difference between social media apps and steel.


Had the car passed the driving test?

There should probably be a specific, complex and comprehensive test for autonomous vehicles. Also, I'd want to see shared liability: the company is driving it (by proxy, through its software), so it should be liable to some extent.


>There should probably be a specific, complex and comprehensive test for autonomous vehicles

Maybe we should start with a specific, complex and comprehensive test for human drivers first.

And to build on the sentiment of this thread, every driver should finance their own test course to practice on before "subsidizing" their learning on public roads.


In the UK, for motorbike tests one does a computer-based test, then practices with a training company on a private off-road test area. One must then pass a test before being allowed to practice under instruction on public roads. Once sufficiently practiced, one can take the full test.

Basically what you propose?

As for comprehensive testing - I think the UK car and motorbike tests are quite good, not comprehensive, but they demonstrate a general ability across a range of skills. Humans can be expected to act relatively consistently (in such things), the test must be more comprehensive when treating a system that you can't expect to have innate consistency.


My personal hunch is that fully autonomous self-driving tech is not theoretically possible under currently known computational models, because it implies many well-known NP-hard problems. Self-driving companies are betting on the ability to find a heuristic/approximation that works "sufficiently well". But I strongly feel that the chasm that needs to be crossed to be "sufficiently good" is not one of magnitude (i.e. we just need more testing!), but of theoretical boundaries, due to the existence of at least two sub-problems which are not computationally solvable: 1. prediction of what pedestrians/cyclists will do next, and 2. accounting for sensor input distortion under bad weather conditions.

Humans can solve these problems due to life experience, not just driving experience. In other words, I think we're gonna need fully-conscious AI to solve self-driving.

The only way self-driving tech will reach production is if the input space is restricted, which is a significant-but-not-groundbreaking iteration on what we've been doing for decades with airplane autopilots and self-driving monorails. Sure, we can have self-driving cars on specifically designed freeways, but nothing more.


Do you believe human-driven tech is theoretically possible?

Current experiments and production data from human controlled vehicles have not been encouraging.


I added an edit to my comment: We don't have a computational/theoretical model for human consciousness. This is why it's called The Hard Problem of Consciousness.


That isn't the hard problem. That is simply having a theory of mind. The hard problem is understanding how the physical world gives rise to subjective experience (what causes conscious beings not to be philosophical zombies).


Oops, you're right! I've edited my original comment, thanks.


This is not a really strong argument against self-driving cars. The fact that a problem is NP-hard doesn't make it intractable. Every day we use apps that deal with NP-hard problems (e.g., routing problems, packing problems, etc.). Also, note that there are P problems whose instances can be harder in practice than (smaller) NP-hard ones.


That's basically what human driving is though, no? We consciously and unconsciously take our attention off the task all the time. It's not possible to drive fully alert to our surroundings all the time, and at least part of the time we are simply dead reckoning in fairly safe lanes of travel, at constant speed and direction, with minimal pedestrian and cross traffic. Call it "controlled chaos", "luck", or "planning", but there is some amount of unknown when moving a multi-thousand-pound object around, and as speed increases so do the chances of mistakes, their severity, and the difficulty of correcting them optimally. It is very interesting and challenging to map the morality of the decisions of moving these machines onto automation.


There's problem classes and problem instances.

What does NP-hardness look like for self-driving car tech? non-deterministic polynomial in: number of objects? number of lanes? time steps in the planning horizon? action/observation branching factor? These things are bounded in practice.

Not saying that the computational problems aren't hard. But ending the conversation at "NP Hard" throws away too many nuances.


NP in the number of data points received from all its sensors.


Aren't self-driving cars already doing much better than humans on the safety front? That's the standard to beat, not perfection.


1. I seriously doubt self-driving cars will be viable on irregular roads

2. Who do you blame when a faulty algorithm eventually kills a person? Who do you get mad at? When a drunk driver kills your family member, you can go to court and look at their face and look at the faces of their family members. When that happens with a self-driving car you'll just be looking at corporate lawyers who will shrug their shoulders at you and say "lol sorry our dumb machine killed your daughter".


2. Why do you need hatred? Why not appreciate that every self-driving death would be used to improve safety for everyone else, just like what happens with plane crashes, building collapses, etc. Also, you can't hate anyone if you kill yourself by accident. What if your loved one kills themself by breaking a road rule?


I think the self-driving initiative is not about beating the human counterpart at launch, but about being at least on par with human performance. That alone has huge value in efficiency and time saved.

Slowly, with many AI-driven vehicles on the road, we can optimize for better performance on safety and other issues.


What if "good enough" is just reduces deaths/injuries by 99%? 90%?

People would still get hurt and die directly due to software that cannot be perfected, but the gains to society as a whole might be worth it.


> Sure, we can have self-driving cars on specifically designed freeways, but nothing more.

But at this point what makes it better than a train?


I'm not necessarily arguing for this, but one viewpoint might be: A vehicle that can autodrive the freeway (i.e. most of the way), and reverts to manual control for the shorter sections before and after the freeway, would have a significant convenience benefit over a train.

Take the train, and you have no vehicle with which to go from the destination station to your endpoint. Also, you could travel at any time instead of being held to the schedule.


Well, fair enough, I suppose.


Sorry, but what's the point of this?

> Elcock said he believed Herzberg may have been homeless.

I do not think that Herzberg possibly being homeless adds any meaningful information. On the contrary, I feel this may subtly support unconscious prejudices.


Spend some time in any major city and you'll see homeless people doing all sorts of reckless things around cars. Just last week I saw a homeless man jaywalk into oncoming 40+ MPH traffic without so much as looking or stopping.

In this sense, it is a possibly relevant detail of the story.


He said this in response to a reporter who asked if she was homeless: https://www.facebook.com/cnn/videos/10158139292591509/

Can't speak for the reporter who asked that, but I can take a guess about why it was asked. The victim's death and identity have been known for half the day. Her being homeless might figure into next-day stories about why we haven't heard much from grieving family members. But yes, you're right, it is a detail that will feed into people's prejudices.


Curious how the numbers work out, in miles driven [1] per death, for autonomous vs standard vehicles.

I know the scale is different, but since this is the first death I'm curious whether the rates fall in line.

[1] is distance the right metric?


This is an interesting metric. Although self-driving test cars are rare, the whole point of their existence is to drive, so they probably clock more hours than a normal car.

About 5,400 pedestrians are killed each year in the US. US drivers cover 3.1 trillion miles a year, so they kill a pedestrian roughly every 570 million miles. Last November, Waymo said they had 4 million self-driven miles, so well short of statistically expecting to hit a pedestrian. In September of some unspecified year, Axios claimed Uber had self-driven over 1 million miles.

My estimate needs help: the distribution of pedestrian-bearing roads and pedestrian-free roads in my total-miles-per-year figure likely does not match what carbot testers cover. Also, this may have been a cyclist death, which adds another 800 or so deaths per year.

But, in any event, in rough numbers, Uber appears to have beaten their expected time to a pedestrian fatality by two or three orders of magnitude.
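
Spelling that arithmetic out (a sketch using the same rough inputs as above; treat it as an order-of-magnitude estimate only):

    # Order-of-magnitude estimate from the rough figures quoted in this comment.
    ped_deaths_per_year = 5_400
    us_vehicle_miles_per_year = 3.1e12

    miles_per_ped_death = us_vehicle_miles_per_year / ped_deaths_per_year
    print(f"~{miles_per_ped_death:,.0f} miles per pedestrian fatality")  # ~574 million

    for company, miles in [("Waymo", 4e6), ("Uber", 1e6)]:
        expected = miles / miles_per_ped_death
        print(f"{company}: {expected:.4f} expected pedestrian fatalities to date")

One actual fatality against an expectation of a few thousandths is where the "two or three orders of magnitude" comes from.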


It occurs to me that because the deaths caused by autonomous vehicles may not follow the same distribution across types of deaths, it might make more sense to compare total deaths per million miles between human and autonomous drivers.


It appears that the current figure is sitting around 10 vehicle deaths per billion vehicle miles travelled.

Which seems unbelievably low. I'm getting these figures from this Wikipedia graph[1]

[0] https://en.wikipedia.org/wiki/Transportation_safety_in_the_U... [1] https://en.wikipedia.org/wiki/File:US_traffic_deaths_per_VMT...


Wow.

If deaths are broken out by pedestrian vs passenger, I wonder how people will respond if safety skews heavily towards one of those groups with self driving cars.


Distance is the metric used so far, but it isn't appropriate for comparing self-driving cars to human-driven cars. The human-driven bucket contains all miles driven: highway cruise control, light snow, heavy rain, tricky merges, etc. The self-driving metric covers only the easiest possible miles. Over time those miles will expand and harder scenarios will be incorporated, but to really know whether self-driving cars are safer we need apples-to-apples comparisons, which is going to require matching humans vs robots on miles driven and a categorization of those miles, maybe a count of tricky unexpected scenarios as well.


Perhaps time as opposed to distance? After all, we drive much fewer miles in a city, but there are much greater opportunities for accidents, particularly pedestrian accidents.


Maybe deaths broken down by speed as well as by time (or by distance, since speed multiplied by time gives distance). More deaths at higher speeds and fewer at lower would seem likely. Purely using time or distance may skew the interpretation if there are more deaths in some speed ranges than others.


Yes, these statistics are typically based on miles driven.


Why is Uber allowed to test its software for free on public roads? Why doesn't it use a purpose-built test track of its own, where it must prove that its software won't kill people (using robotic crash-test dummies it must avoid) before it is allowed to use public roads?


I just don't trust Uber and its engineers enough to trust them as a self-driving car company. Even before this incident.

Their organisation has been shown to be deceitful, their CEO to be devious. I can completely believe they would cut corners or claim safety where they know there is risk. To get to market ahead of the competition knowing that the fate of road users lay in a dice roll.

Technically they had to resort to stealing Waymo's IP to get ahead, and their self-driving cars have previously been shown to behave recklessly.

I do, tentatively, trust Waymo. I've seen enough about the resources they've put into this, the extensive testing they've done, and their safety record so far to at least give them a shot.


Why am I not surprised that this happened with an Uber self-driving car?

In fact, their culture of ignoring rules and common sense might be okay for business development, but when it comes to self-driving cars and human safety it is just irresponsible and inhumane.


It's not hard to believe that self-driving autonomous cars will make the roads safer overall.

The issue is that while fewer lives will be lost, it will be a different set of lives than would have been lost without self-driving technology.

Those lives will have their day in court.


> Car was autonomous with driver behind wheel

Let me make a prediction. Uber will claim that the driver was in control of the vehicle at the moment of the accident. And the "driver" will receive a bonus payment in the coming months.


This paradigm-shift is a chance to address one of the leading causes of preventable death in the world.

It doesn't surprise me for a moment that this was Uber (though it might have been a Tesla). From all I've seen and read, these companies are racing unscrupulously, and some have inferior technology compared with others.

Waymo seem to be approaching the problem from a safety direction rather than a pure race for profit.

We need to hold SDCs to a higher standard than we hold human drivers. Not 1.5x the standard; 1,000x. 10,000x. 100,000x.

And every death must be treated as a serious failure of engineering. These are preventable deaths.


I know quite a few people working on this at Uber ATG. They are in panic mode.


Really? This is completely new tech, and a new process. How is any reaction "a standard move?" I guess it could be standard for any new tech to result in a death from autonomous machinery. In any case, things like this are why I don't think fully autonomous driving is nearly ready for city environments. I'm not saying it isn't close, or that it shouldn't be worked on. I just don't think the intelligence systems are nearly as worked out as they should be. Not to mention the level of sensors needed.


I'm sure that pile of data will gain intelligence if we stir it hard enough. Any moment now.


Given how incompetent Uber is at regulation of anything else in their company, is this really a surprise?


A while back I was riding my bicycle home from work and saw one of the Uber robocars coming the other direction while I was making a left turn, so I decided to test their reaction algorithm by starting my turn while it was still approaching -- the car didn't even flinch, while a human driver would have at the very least honked at me.

As an aside, I've also unfortunately been hit in a bike lane in Tempe by a human driver; if it had been a robocar I'd be living on the beach somewhere off the settlement check.


I would hate to have been the (backup driver) in this sad situation. Regardless, the regulations couldn't come soon enough.

From SF Chronicle [1]:

<QUOTE> The self-driving Volvo SUV was outfitted with at least two video cameras, one facing forward toward the street, the other focused inside the car on the driver, Moir said in an interview.

From viewing the videos, “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway,” Moir said. The police have not released the videos.

The incident happened within perhaps 100 yards of a crosswalk, Moir said. “It is dangerous to cross roadways in the evening hour when well-illuminated, managed crosswalks are available,” she said.

...

“I suspect preliminarily it appears that the Uber would likely not be at fault in this accident, either,” Moir said.

However, if Uber is found responsible, that could open a legal quagmire.

“I won’t rule out the potential to file charges against the (backup driver) in the Uber vehicle,” Moir said.

But if the robot car itself were found at fault? “This is really new ground we’re venturing into,” she said. </QUOTE>

[1] https://www.sfchronicle.com/business/article/Exclusive-Tempe...


Forgive my ignorance, but isn't this not supposed to happen with LIDAR equipped cars?

My understanding is that a LIDAR provides a completely accurate 3D map of the surrounding area around a self-driving car. This is in contrast to Tesla's image recognition approach, which makes a 3D map from 2D images. So it seems like a pretty giant bug if a self-driving car ever crashes into another object with LIDAR, since it should always know the location of external objects.

Please correct me if I am wrong!


You're not wrong, but it is more complicated than that. Lidar gives you an array of 3D points and intensity, corresponding to where the lasers bounced back, and how strong the reflection was. Roughly speaking, from there you have to decide 1) which sets of points belong to the same object, 2) what those objects are, and 3) what those objects intend to do in the future.

So yes, a lidar-equipped AV completely not sensing a pedestrian would be surprising, but you can see how it might have incorrectly classified the pedestrian, or misunderstood the pedestrian's intent.
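A toy sketch of step (1), just to show why "the lidar returned points there" and "the system recognised a pedestrian" are different claims. It uses scikit-learn's DBSCAN on made-up points; real perception stacks are of course far more elaborate than this:

  # Toy example: cluster raw lidar-style returns into object candidates.
  # Points are invented; requires numpy and scikit-learn.
  import numpy as np
  from sklearn.cluster import DBSCAN

  rng = np.random.default_rng(0)
  # A wall-like surface along y = 8 m, plus a small upright cluster near
  # (4, 3) that might be a pedestrian.
  wall = np.column_stack([np.linspace(0, 10, 200),
                          np.full(200, 8.0),
                          rng.uniform(0.0, 2.0, 200)])
  person = rng.normal(loc=[4.0, 3.0, 0.9], scale=0.15, size=(40, 3))
  points = np.vstack([wall, person])

  labels = DBSCAN(eps=0.6, min_samples=5).fit_predict(points)
  for label in sorted(set(labels) - {-1}):      # -1 is DBSCAN's noise label
      cluster = points[labels == label]
      extent = cluster.max(axis=0) - cluster.min(axis=0)
      print(f"object {label}: {len(cluster)} returns, extent {extent.round(2)} m")
  # Steps (2) and (3) -- classifying each cluster and predicting its motion --
  # are where a pedestrian can still be mishandled even though it was "seen".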


Something of a self-driving car Rorschach test. On the one hand there is the tragedy (car hits person, person dies), on the other there is the technology (computer doing the driving), and then there is the fog of reporting, where actual data is hard to come by because people are people and report on the things they saw/heard that were important to them.

Last night there was a story on CBS about the first 'self driving bus' going live in San Ramon, California [1], where the reporter steps out in front of it to see if it will stop (spoiler alert: it does).

And while it was a tragedy, it's unfortunate that because it was a 'self driving' car this fatality gets more coverage than the thousands who are killed by 'human driven' vehicles. Bicycle advocacy groups have been arguing for years that better, separated bike lanes would save lives. Perhaps the companies behind self-driving can get behind that effort to protect bike riders from humans and make it easier on their cars.

[1] https://www.cbsnews.com/live/video/20180319105443-california...


So this is it then? First fatal accident involving a fully self-driving car which might have actually been the car's fault?

Well, it was obviously going to happen sooner or later. It'll be interesting to see what the fallout is. Up until now regulators have been, surprisingly, taking a pretty relaxed approach to regulation of self-driving cars. Hopefully this one accident doesn't impede development on other autonomous car programs too much.


Very relevant video on what the discussions about 'the ethics of self driving cars' miss: https://www.youtube.com/watch?v=ozcaLnTuidU

Summary: We spend too much time talking about what decision the car should make in these situations, not enough on how allowing a corporation to make those life and death decisions changes our society.


Situations no self-driving car can avoid:

  - philosophical dilemma
  - physics-constrained reaction times
  - actions that violate the rules of the system

Solutions:

  - protected car lanes
  - protected bike lanes
  - protected pedestrian lanes

Reasons why these solutions are not put in place:

  - cost

Until humans determine that the cost of human life is higher than the cost of upgrading infrastructure, we should accept human death as a regular part of autonomous driving, just the same as we do for non-autonomous driving. 37k dead people every year in the US due to human drivers.

Top reasons for auto accidents today include inclement weather, reckless driving, speeding, driving under the influence, and distracted driving. In theory, most of those could be solved by autonomous driving. But then the list of reasons for accidents would change to whatever new reasons cause autonomous car accidents, such as damaged sensors, programming errors, equipment failure, road hazards, etc.

Even with autonomous cars, we will still need protected lanes, and we will still never implement them, because we don't really care when people we don't know die.


If by "protected passenger lane" you mean putting a Jersey barrier between the sidewalk and the road, no thanks. Maybe I'm a spoiled suburbanite, but I can't help but recall that, scant a century ago, pedestrians could walk along or on the streets wherever they pleased.


Not specifically that one way.

In order to allow transportation to co-exist with pedestrians without collisions, you need some kind of separation between the two. With subways, the protected lane is literally underground, but it does definitely have a protected lane. If you don't go under ground, you can go above ground, like several subways and metros do around the world.

If you don't do either of these, you have to make concessions on the ground level. My personal preference would be tall fences around the roadway, and pedestrian bridges that go over or under the roadway (but both have problems). Another would be to still have the fences, but automate some sliding barricades that would activate when traffic halted, which is somewhat like how train crossings work. We could also implement hybrid methods, like that at Shibuya crossing, for very congested intersections.


I know that when Uber was stopped by the CA DMV from operating its self-driving cars without a permit, the Arizona governor went all out to promote the state as a beacon of business friendliness, deregulation, etc. Perhaps, when something as critical as self-driving cars is sharing the road with pedestrians, bikes and other vehicles, a more cautious and deliberate approach is warranted?


I'm saddened by this incident but have thought a lot about this eventuality. There are a few levels of societal acceptance of self-driving car death outcomes I can think of:

A - Human equivalent: self driving car obeys reasonable rules that a human also would eg minimum speed on a highway and kills someone

B - Trolley problem: (very artificial scenario) self driving car has to choose between killing N or M people where N > M and wrongly (in hindsight) chooses N

C - Car fault: self driving car kills someone in a situation where no human would have

As a society we would probably accept A easily but B starts to get shaky. C currently looks completely unacceptable, BUT I would argue that society has to get to a point where even C is ok conditional on the probable result that overall car deaths decline dramatically.

In other words we will have to get to a point where individual deaths are extremely regrettable but the overall death reduction of adopting self driving cars are so undeniable that the individual deaths can be discussed without also talking about banning SDCs in the same breath.


How does that work out? I mean, you have a situation where an SDC kills someone when no human driver would have, but somehow it gets safer?


Yes, because in aggregate the deaths are still reduced. We need society to get past individual death stories (which are very sad, and we should do everything we can to prevent them) and not let them distract from the overall need for SDCs.


The self driving car craze baffles me. I would only trust it if it were on a road made for it and the cars communicated to one another.


Or at least have the self-driving cars operate only along specific, well-marked, well-maintained autonomous driving routes, similar to light rail/bus lanes, so pedestrians know to be extra careful.

And for long haul trucking applications, freeway lanes at specific times of late night/early morning could be dedicated for autonomous driving.


There are only two paths from here - an escalation of the scramble to real-world driving hidden under self-serving rationalisations ("the bigger picture"), or a step back to some industry self-regulation, where self-driving just does not happen until the sensors, the maps and the algorithms are much, much improved.

We know very little about the circumstances here, apart from the obvious tragedy, but given that we learnt from Tesla that you cannot rely on cameras alone, whatever is learnt from this must be shared among the whole industry and serve as a baseline for the future.

I would prefer an outright ban on driverless cars in real-world settings (until perhaps each competing car can safely navigate the worst sensor data other companies can offer), but at a minimum we need a neutral clearing house similar to the airline industry's.

There are enough industry insiders on this forum, plus a sprinkling of regulators, that a global self-regulated path could easily be forged.


Given that I've seen (just last year) an Uber self-driving car run a red light in SF, nearly hitting several people, this is not at all surprising.

I do think Uber should be charged if the car is found at fault. Not a civil case, criminal. We know machine learning algorithms fail all the time (like this). It shouldn't be alright to just ignore this.


> Not a civil case, criminal.

Has there ever been a case where a car manufacturer is charged with a crime for a defect that results in a death?


I don’t know about cars, but there is precedent for company officers being held criminally liable for negligent homicide. Even in non-homicide cases, individuals sometimes face criminal charges, as the Volkswagen engineer behind the emissions cheating scandal found out.


Of course it was Uber. I'm sure they don't even give a fuck any farther than the cost to their brand either.


Wow, this is sad.

I always imagine I would feel uncomfortable when developing software in certain areas such as this, or e.g. for space rockets carrying humans. Errors have a much higher impact than when only developing web applications, where the worst-case scenario is a temporarily unreachable website, or maybe some data loss.


I wonder how this will be treated legally.

Police can't just write "killed by Python script" as a cause of death - that would be crazy - and there is no legal framework for cases like that. My guess is that A.I. could be equated to cruise control, meaning the person behind the wheel is responsible.

Maybe someone knows better?


People are injured all the time by the malfeasance of a corporation involving many employees, such as when a bridge collapses. There is nothing that says a police officer needs to immediately find a single guilty person 10 minutes after an accident like this.


I agree, for large engineering projects, such as bridges and buildings, obviously many employees are involved and in many cases you can't even single out someone who is responsible.

But, to the best of my knowledge, there are no "corporations" on the road. No matter what company this person is driving for and under what type of contract, this person is responsible for all kinds of incidents.


>>The pedestrian was outside of the crosswalk. As soon as she walked into the lane of traffic she was struck.

>>Tempe Police Chief Sylvia Moir said that from viewing videos taken from the vehicle “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway.

A lot of comments on here are faulting machine learning for this or algorithmic flaws, but unless we see the video it is really hard to blame the algorithms. Based on the quote from the story, it sounds like someone stepped out into the street, not in the crosswalk, at night, directly in front of a car going 40 miles per hour. How could a human or any algorithm cope with that scenario?
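For a sense of the physics involved, here is a rough stopping-distance calculation at the reported speed. The reaction time and deceleration are generic textbook-style assumptions, not measured values for this car or this driver:

  # Rough stopping distance at ~40 mph; reaction time and deceleration are
  # assumed ballpark values, not data from this incident.
  mph = 40
  v = mph * 0.44704            # convert to m/s (~17.9 m/s)
  reaction_time = 1.5          # s, a typical attentive human driver
  deceleration = 7.0           # m/s^2, reasonable dry-road braking

  thinking = v * reaction_time
  braking = v ** 2 / (2 * deceleration)
  print(f"{mph} mph: ~{thinking:.0f} m reacting + ~{braking:.0f} m braking "
        f"= ~{thinking + braking:.0f} m total")
  # Roughly 27 m + 23 m = ~50 m. If someone becomes visible well inside that
  # distance, neither a human nor human-like software can stop in time; a
  # system with better-than-human sensing could still shave off the reaction
  # portion, which is the part people are questioning here.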


My non-expert opinion is that we will not see self driving cars become reality in our lifetime. I don't understand why anyone believes training on some videos/pictures to calculate some probabilities would ever be enough to handle all the complexities of the real world.


I think it would be insane to expect 100% safety from any technology, let alone new technology. People do crazy things on the road, I am witnessing it almost daily. And if you look up dashcam videos on the internet, you'll be scared to ever drive again. Expecting that some magic technology can deal with all this craziness with 100% safety is impossible. And then we'll add bugs and malfunctions, which absolutely every technology has, especially - new one.

So, brace for the incoming wave of similar news. These things will happen, and these things will be heavily publicized. Hopefully it won't cause a stupid overreaction, to the tune of banning the technology altogether or surrounding it with a bubble of regulations so thick as to make it infeasible.


That car has to have a dashcam -- what's the current caselaw with regards to the 5th amendment?


I'm no lawyer, but I imagine that it wouldn't be hard to get a warrant for the dashcam footage. The Fifth Amendment is about compelled testimony, not possessions. Otherwise, all warrants would violate the Fifth Amendment.


I love how everyone is throwing fuel on the fire here with no actual details as to what happened in the accident. It could simply be she ran out into oncoming traffic and got hit. Hard to say a human driver could have performed any better. The devil is in the details.


Agreed. We need a lot of time to digest this. At least anything before the NTSB investigation is jumping to conclusions.


Why is Uber allowed on public roads?


This has little to nothing to do with the specific company (although it is a convenient deflection ATM). The very idea that pre-programmed computers can deal with driving is the fundamental problem. In time, people will realize that, but at the moment the "solution" is to "patch" and make more laws to hide the truth: AI has nothing to do with intelligence.

Ultimately they will blame humans, humans are the bugs in their code.


As far as I know all self-driving car programs use some form of machine learning, so they aren't exactly "pre-programmed" in the sense of good old-fashioned AI based on strict, pre-determined rules.

What exactly do you mean by "deal with driving"? Drive without a single accident, ever? That's obviously impossible in practice. Drive better than humans, who in 2016 killed 37,461 people in the US alone? I don't see how that would be impossible - human drivers have a limited field of view, slow response times (average time to brake is ~2.3 seconds), and are frequently distracted, sleepy, drunk, etc.


Let me know when self driving cars can get sleepy, distracted or drunk. Until then, I'm much more concerned with "stop before hitting things 101". This "pre programmed cars must be better than X" is getting old fast.

This is one of those self correcting problems, and it's going to be fixed way faster than the startups pushing this "smart machines" propaganda are going to like.


"Artificial Intelligence has nothing do to with intelligence" welp

I don't even know what to make of that


I'm not the original poster, but maybe they just mean that as it stands right now, artificial intelligence refers to nothing that is actually intelligent in a way that an average person probably thinks about intelligence, and it's a little hard to see where the leap to actual intelligence is going to come. Maybe?


I really don't understand the need to put a self driving car around pedestrians or other human drivers. As with any machine learning/control system, wild instabilities can and will crop up. Why don't we simplify the problem domain and keep well suited, designated infrastructure just for self driving vehicles, and solve the last mile problem with a slow 15mph creep to the ultimate destination. I mean, it's impressive that cars can drive themselves through a busy intersection alongside other human drivers, but it is by no means necessary, especially when adding humans into the mix adds a degree of uncertainty that is very difficult to account for...


> “The pedestrian was outside of the crosswalk. As soon as she walked into the lane of traffic she was struck”

Uber has some fucking shitty programmers guaranteed. They murdered this lady. Think about it:

Smart cars basically have 360 degrees of sensors. There’s no way it didn’t detect that lady coming up on the car, and the fact that it killed her says to me there was no countermeasure. It probably didn’t even slow down until the moment of impact. That's some fucking elementary school shit.

> Chance of Collision? 100%. Maintain Speed

In structural engineering if you designed a bridge poorly and it collapsed killing people you'd lose your ability to practice engineering. The same should be applied.


I wonder how companies handle these situations. Most places have clear requirements, e.g. stop, check if anyone is injured, contact police, remove debris from the road if safe to do so. These things aren’t so easy without a driver.


Based on landmarks from the photo from the news article, and the fact that the accident supposedly happened near Mill and Curry, I'm assuming that this was the location of the accident:

https://www.google.com/maps/place/N+Mill+Ave+%26+E+Curry+Rd,...


Uber must have a full video of this - just in case. I can hardly imagine they do not have a full record of this, as well as of all miles traveled. It may be in these 1366+ comments, but I have not read them all to see.


I never understood the recent push towards self-driving cars, as it already seems perverse that the entire surface pavement has been dedicated primarily to motor vehicles and only secondarily to our actual selves, as pedestrians.

Self-driving puts further pressure on what should really be two discrete planes of activity (perhaps one in tunnels below ground or in the air - and I don't mean the pedestrians!).

Perhaps now the arrogance of such bad ideas becomes a little bit more clear to the tech set of folks who, in my view, were excessively pushing such strange goals upon a public that wasn't clamoring for them.


To repeat my comment from a previous discussion, which brought a lot of downvotes: what happens when (not if) a self-driving car runs over and kills someone (e.g. because of a software bug)? Do such cases carry criminal penalties? Who is penalized? Or will all cases of autonomous car accidents with deaths become civil cases? If so, do human drivers get the same new rules, or if they kill someone by accident (because they got distracted) do they still go to jail? Is that fair?

In this particular case I assume the operator will be thrown under the bus, which is also unfair.


Why am I not surprised that, among all the teams competing in this space, many of which have clocked in far more miles than them, Uber is the first one to kill someone?

This may well just be bad luck. But I cannot shake the feeling that if Uber started an ice cream venture, they would store their molasses on a hill in Boston. The only way to get “humanity” associated with Uber would involve an Uber zeppelin.

Or it’s a conspiracy, because nothing is as threatening to Uber as autonomous cars. This is certain to invite more regulatory scrutiny. Just kidding... I think..


You mentioned bad luck and that could very well be it. Unless Uber starts having more incidents, it isn't fair to say that Uber's technology is inferior.


Why is everyone concentrating on the software? There was a human in charge and the car was exceeding the speed limit. That sounds like either driving without due care and attention or reckless driving. After all, the driver's reaction time is almost certain to be much longer when supervising an autonomous car than when driving directly, making it much more important to adhere to the rules.

And if it can be shown that it was company policy to allow the cars to exceed the speed limit then heads should roll all the way to the top of the company.


It was exceeding the speed limit by 3 mph; I don't believe this far exceeds safe margins.


Why not test these things on a private track? You have a multi-billion dollar company attacking this problem and it feels like all they tested before they hit the public roads was lefty and righty.


Looking beyond the devastating tragedy at hand, it might be necessary to get the question of responsibility publicly answered. I don't think I am the only one here who has deliberately chosen not to work in fields where human lives are at risk.

Despite the likely outcome - that the engineers behind this took every safety precaution - I think it's extremely important to get the message across that people with a CS background working on such use cases are going to be held to the same standard as, for example, mechanical and electrical engineers.


And yet (disclaimer - this reflects only my personal view, might not correspond to the current state of affairs at Uber, and it is possible that Uber cars are still safer than human drivers, but still - those who forget history tend to repeat it):

https://www.forbes.com/sites/samabuelsamid/2016/12/15/the-tr...


Let’s take the extreme example where a bug in a self-driving semi truck leads to it mowing down 50 people on a busy sidewalk. How are the victims to get justice? In almost all incidents besides a human truck driver having a major health incident, that driver would go to jail, but it seems there will be few repercussions beyond fines for autonomous driving companies. I think once a mass fatality occurs, people will scream for autonomous driving execs and programmers to be held liable.


The way I see it, by using the road, you're sort of giving up the right to revenge justice. It's a game of Russian roulette. People make mistakes and kill others or themselves no matter how careful they try to be. It's like asking where's the justice if you die from a heart attack or cancer? Those aren't crimes, they're bad luck.


Having these massive robots on our public streets better at least result in unfettered, transparent public access to all the data informing the thing in the time surrounding the crash.

This shift towards autonomous vehicles is utterly unnecessary. They could be simply improving the safety systems of the vehicles to intervene when the driver is about to cause an accident.

Instead they're doing the opposite - expecting the human to intervene when the robot is about to cause an accident.


I see a lot of detailed analysis in this thread, which is impressive given that literally nobody has read anything more than the half dozen factoids that have been released. At best, some of you are looking at maps of the incident site to figure out potential failure cases.

Knowing that you don't know something is as important as having the facts. To harken back to Cheney, these aren't even unknown unknowns, you're basing opinions on known unknowns.


Cheney? I think you mean Don Rumsfeld: https://www.youtube.com/watch?v=GiPe1OiKQuk


you got me


The article says almost nothing about it other than that the pedestrian was "outside the crosswalk." But the details are what matter. Without the details it is not possible to say if striking a pedestrian was avoidable. There is also a large grey area: Could the collision have been mitigated? What other risks were involved?

The bad outcome would be that it turns out that pedestrian safety is too underdeveloped in this system to really be safe.


I have not been following the development of self-driving car tech very closely, but I have some familiarity with the difficulty of the challenges involved and I have a feeling that we are at least two decades away from having fully autonomous tech authorized for use on public roads. Am I underestimating the progress of the tech? I have the impression that there is a tremendous amount of unjustified hype in this field.


It is both an over and underhyped field depending on the area you are looking at.

HUGE difference between a consumer self-driving car [everywhere, at all times] and Machines as a Service [geo-, time- and weather-fenced operations][1].

Waymo appears to be at the head of the pack; Sacha Arnoud, Director of Engineering for Waymo, gave a talk a few weeks ago at MIT[2] that gives a good idea of where things stand. We are about 5 years out from these starting to roll out as MaaS (machines as a service). Probably 10+ years for level 4 highway operations in consumer models, according to Frazzoli.

[1]Emilio Frazzoli, CTO of nuTonomy https://www.youtube.com/watch?v=dWSbItd0HEA [2]https://www.youtube.com/watch?v=LSX3qdy0dFg


You're not underestimating anything. Rather, the vast majority of developers (much less everyone else) have no familiarity at all with the challenges that self-driving tech has to overcome.


I'm really shocked at the trend in comments here.

Yes, we get it. Fatalities will happen sometimes, they are unavoidable sometimes, and what matters in the long run is if we can achieve a significant overall reduction in fatalities.

But my god, a person was just killed by a computer. Can't we have some compassion and humility?

Let's, as a community, set the standard for how we will react to these events. Let's make sure Uber releases detailed data on what happened, whether they were at fault or not. Let's hold the media accountable for their reporting. Let's mourn the loss of life and think about how we can solve these problems.

But for christ's sake, please stop posting the same thing everybody already knows, which gets posted on HN whenever a self-driving article comes out.

Can't we do better?

---

EDIT: I clarified what I meant here [1]

Wasn't trying to say this thread should be all about mourning. This is HN so we should talk about technology, even when a person died. I'm pointing out one specific argument that gets repeated over and over in place of a substantive discussion, and I think we can do better.

[1] https://news.ycombinator.com/item?id=16621589


What trend in the comments is bothering you? It's a little unclear from your comment.


The trend I'm seeing is to immediately defend self-driving cars based on "but if you look at it statistically I'm willing to bet this is better than human drivers".

It is as if HN users expect the news headline to be: "an event occurs which is statistically speaking non-anomalous".

If one doesn't feel we have enough data to draw conclusions yet, then sure, fine. But a person just died. This is a newsworthy story and it deserves our best attention and discussion, not repetition of something obvious ad nauseam.


The root problem is that there isn't enough specific new information to slow down the discussion and make it substantial. Lacking that, people recycle the generic points. Meanwhile the story is a dramatic one, so the energy to discuss it is high. That's a double whammy: lots of energy driving the gears, but no grist to slow them down. This produces reams of commodity comments. I'm sure the story being at #1 amplifies the effect, too, so it's a triple whammy. But it doesn't make sense for the story not to be there.

When the sensational-to-informational ratio is so large, we always get this.


> The root problem is that there isn't enough specific new information to slow down the discussion and make it substantial. Lacking that, people recycle the generic points.

Is there a term for this tendency? Curious to read the research on it.


It would be great to have just the right term for it, because then there would be a better chance of the community absorbing the idea, and that kind of feedback loop is the only thing that can alter behavior. So far I don't know of one. I use the distinctions 'specific vs. generic' and 'reflective vs. reflexive' in moderation discussions a lot, but those don't quite cut it.

Alan Kay pointed out that Kahneman's slow-system vs. fast-system distinction also applies here.


I hereby propose the Dang Double Whammy Effect.


"Bikeshedding" should cover it if there isn't a more specific term. If not, then bikeshedding will.


>> not repetition of something obvious ad nauseam.

The fact that Uber's self-driving car program has a worse deaths-per-miles-driven metric than human drivers is anything but obvious to me.

After an incident like this, it's natural to question whether anything needs to be done policy-wise. And, I don't see any reason why that discussion can't take place here in the comments section.

It seems like you want HN users to use the comments section to express condolences? Or, to just have an empty comments section?


I still don't understand what specifically you're objecting to (at least I think I don't). As best I can tell, you don't disagree with the points about statistics but think the discussion is somehow "without compassion", which stops it from being "our best discussion".

Do I understand you correctly that you'd be okay with the comments if they were just prefixed with some "thoughts-and-prayers" genuflection? And if so, why not say it directly?


I think dang described it best here [1]

It's not the lack of compassion, it's the lack of substance.

And I suppose it strikes me as lacking compassion and/or humility to come and use HN as a platform to recycle the same obvious idea, when the story is about a death. It would be different if it were about a tire blowing out.

Everyone knows how these statistics work on a macro level. That's not worth commenting here on HN, and especially not on this story.

I just think we could do better.

[1] https://news.ycombinator.com/item?id=16621716


I read dang's comment before posting. It still didn't help -- I can't come up with a mental model that generalizes the criticism into actionable rules for improved discussion (except as noted below).

>Everyone knows how these statistics work on a macro level. That's not worth commenting here on HN, and especially not on this story.

If statistical comparisons were obvious, and people didn't overreact with unjustifiable countermeasures, then indeed it wouldn't be important to make the point. But the world we live in is one where nuclear deaths are scarier than heart disease deaths, and shark attacks more than auto collisions.

I think you're committing the fallacy of "my social circle doesn't make that mistake, therefore it's not worth pointing out."

I can agree that users should be more careful about making the common point when others have made it, though -- I had a reply on this thread about "Uber flouted the $150 license" that I deleted when I saw others made a more informed response to the same point.


Fatalities will happen sometimes, they are unavoidable sometimes, and what matters in the long run is if we can achieve a significant overall reduction in fatalities.

I haven't even read any of the comments on this story yet, and this type of pro-self-driving-car response is what I expect to be in the majority. That and anything anti-Uber :)


I find that what might help those who quote statistics is to self-reflect and ask the question: how would you react if your father/mother/sister/son/daughter/etc. was the victim?

Would you care that statistically self-driving cars have killed fewer people than human drivers?


Well, just because people react badly when confronted with personal tragedy doesn't mean you have to do the same when it's not applied to you.

For example it's clearly excusable to be angry / depressed / looking for someone to blame when a personal tragedy happens to you. But that doesn't mean you should replicate that behaviour when a tragedy happens to someone else. We should distance ourselves from subjects where we're emotionally compromised, not guide our decision by the example of emotionally compromised people.


> …statistically self-driving cars have killed fewer people than human drivers?

I realise that the point you're making is about emotional turmoil and not really statistics, but it looks like so far Uber aren't doing so well in terms of driving safety:

https://news.ycombinator.com/item?id=16620736

So yeah, if it turned out that Uber were sending cars out into the world without sufficient engineering or testing* and they killed someone (especially someone I know) then I would be understandably furious.

* In order to get to market sooner? I don't trust Uber and I wouldn't put this past them. Nothing I have seen of the Uber self-driving car program makes me want to be anywhere near one on a road.


> Would you care that statistically self-driving cars have killed fewer people than human drivers?

I don't think there are enough data points right now to make such a claim, but please correct me if I am wrong.


All deaths are a tragedy to someone. A statistic is an aggregate account of untold suffering and misery. To be truly compassionate, we must learn to see the humanity in statistics. We must recognise that our emotional reaction to the death of a loved one and our indifference to the death of a stranger is not a sign of compassion, but egoism. As enlightened citizens, it is our duty to use all our ingenuity and effort to reduce the sum total of suffering.

This death is a tragedy. There were 37,461 such tragedies in 2016. Every single one of them matters equally. It is only by studying and analysing them in aggregate that we can prevent future tragedies. Seat belts, air bags, crumple zones, anti-lock brakes, median dividers and rumble strips are tremendous acts of compassion. Statisticians, scientists and engineers have made an incalculable contribution to human welfare.


> how would you react if your father/mother/sister/son/daughter/etc was the victim?

Shit, of course. But I'd like to believe that I'll realize that banning self-driving cars is not going to make things any better, just like I'd never propose banning cars in general. I'm not sure how it would help here to imagine it's my daughter; it's not as if we imagine it being our daughter in every traffic accident we hear of, since that would be crippling given the sheer number of them...


Would you care that seatbelts are statistically likely to save lives if your loved ones died drowning in their car due to them?

What do we do differently based on the answer to that question?


Yes. I'm in favour of safer drivers, robot or human.


> how would you react if your father/mother/sister/son/daughter/etc was the victim?

How you would react when emotionally distraught shouldn’t be the model for how you act all the time.


This. And I think this is why these stats end up posted here when stuff like this happens. Because the emotionally distraught look at this situation and they're like BAN SELF DRIVING CARS! and the people posting stats are here to remind them that they're still safer than regular cars and accidents happen but we can't impede progress because of one accident here and there.


Could you please not use allcaps for emphasis in HN comments?

This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.


I didn't realize that, it's been a while since I read those. Thank you for the information!


I think that's a strange thought experiment, since it's basically requesting that you think emotionally and as irrationally as possible to draw conclusions.


It's requesting that you remember that a human life was just lost, and to not reduce this woman's life to a rounding error.

The only strange thing in this thread is the emotional detachment many are showing and one could probably argue that's the exact kind of thinking that will lead to more cases like this.


The news isn't about the fact that a human died, unless I'm somehow missing the constant flood of ~50 news items a day about other deadly car accidents. The news is very much about self-driving cars and our relation to them. That's what people react to. The human death is tragic, but not really the topic.


Lately, I'm quite disappointed with the tech community's approach to society.

Lots of people are talking as if human beings are a resource whose exploitation should be secured for businesses and technical curiosities.

No, people: humans are what matter, and it just happens that there are business opportunities and scientific curiosities. No, it is not O.K. to kill a few people to speed up the development of self-driving vehicles. No, it is not O.K. to have total surveillance so that we can improve AI.


It’s not just you, it’s a lot of the country that’s rapidly going from concerned to pissed. If they read more threads like this, they’d probably already be coming for us with torches and bad intentions. They’re worried that tech is full of disconnected, vaguely autistic (and they don’t know what that means beyond iRobot), too-smart-for-their-own-good people who care more about making money than people. In my experience that’s incredibly unfair, but it only takes a small percentage of loud and callous voices to make us all look bad.

We need to start cleaning our own house, or it’s going to be cleaned for us by people with agendas and little understanding of what they’re cleaning!


Do better than what? This is a technology news forum, the relevant question is how this affects the technology. People are going to talk about the potential long term trends in self-driving technology. What exactly were you expecting?


"The relevant question is how this affects the technology"

Is arguably the whole problem with our industry in a nutshell. The more relevant and largely ignored question should be "how this affects human beings".


This just seems like a rhetorical trick to me. Obviously we're talking about humans, it's just through the lens of technology. The website has to be about something in particular besides "aren't humans great??".


Actually I think rhetoric matters enormously - how you frame a question affects how you come to think about a situation.

If you ask a question "through the lens of technology" ("Can we solve this hard computer-vision problem?") you arrive at different answers to those you would get if you ask the question through (say) the lens of societal benefits or humanity's relationship to technology ("Is it ok for a badly programmed computer to accidentally kill someone?").

I say this not to criticise the enormous benefit this technology could bring, but rather to provoke us to think about the implications of the technologies that we are creating outside the confines of an IDE.


I think the relevant question must be how the technology is going to affect lives.


Okay, and is that not what is being discussed here? I'm just really confused as to what the OP is complaining about.


I think what bothers me is that the whole story gets dismissed by one idea: that this event doesn't matter, it's only the overall statistics that matter.

That this event doesn't necessarily reflect a larger trend is not incorrect--it's just that it's obvious. That self-driving cars could potentially reduce fatalities in the future is not only obvious, it's also somewhat off-topic when the story is that a person just died.

So my question is, when a computer kills a person, can't we do better? Better than stating the obvious, and better than being dismissive.


I'm far from being an Uber-supporter here but I think you're overreacting. As others have pointed out, this story is only relevant to HN (and to the national media at large) because it involves Uber and AV, not because it involved a pedestrian dying. It makes sense that the discussion centers around policy and systems since few people here can claim to have even known the victim. Nor does participating in this discussion preclude anyone from going to the victim's Facebook page/funeral and expressing condolences.


What is that supposed to mean? If the only way it directly impacts a forum is because of the company doing the killing, the killing is irrelevant?


Uh, no. Sorry, at a loss to understand where you see "killing is irrelevant" in my comment.

edit: NM, I can see where you'd see that, even though I argue that both being concerned about the death, and wanting to defend Uber, aren't mutually exclusive. Instead of saying "this story is only relevant to HN", I should have said, "this story is only noticed by HN and the national media". Which is true, as far as I can tell. I can recall few nationwide stories about other pedestrian deaths, even though nearly 6,000 occur every year.


A person was not killed by a computer. There was a person behind the wheel responsible for taking control of the car in a situation like this. If this is a computer killing someone, then every instance of a person being killed while cruise control was engaged also counts.


Yeah, someone put that poor person there despite all warnings that humans would perform terribly as standby drivers once the AI becomes good enough to not require interventions all the time.


> But my god, a person was just killed by a computer. Can't we have some compassion and humility?

You have to recognize that your culture and your moral standards are not universal and you are in no position to force them on everyone else. It is also foolish to expect everyone across the globe to react in a certain way to certain things.

On the other hand, death topics are a side of HN that is heavily biased towards what the mods feel about them and their culture. They even ban accounts and give out warnings if you happen to disagree with them and react differently. So there is not much sense in even discussing this.


You are seeing the Trolley Problem [https://en.wikipedia.org/wiki/Trolley_problem] in action. A majority of respondents, when asked that problem, would choose to let a few people die to save a larger number of people. The emphasis is on "let a few people die" and not "kill a few people": machines (the lever in that problem, the autonomous car in this scenario) somehow evoke a more logical and less emotional response. I guess machines create an emotional distance between the people who would die and the respondent. If the same respondents somehow felt more responsible for the deaths, i.e. if the machine were removed from the scenario, the response would be much different.

Not implying that the lack of empathy is in any way acceptable, but people should know that when machines are involved in such scenarios it is much easier for them to distance themselves emotionally. This is not new; weapons are a prime example. Maybe by knowing about this dilemma people can react better to such tragedies.

I hope people show more empathy knowing this.


I think the argument needs to be made for ETHICAL development of mission-critical software. The aerospace industry has been writing software that holds people's lives in its hands for years. NASA has been doing the same.

We need to see accountability for any death caused by a car being operated by a computer. We need to see integrity in testing and metrics that show that these are actually safer for humanity. One more death than expected due to software vs human is too many.

I don't remember where I saw it but the process for creating mission critical software at NASA is insane. They analyze everything. It's a completely different and slower process than the one we use for building websites. SpaceX has probably increased the productivity of said systems, but I'd like to see how they develop mission critical software to keep things safe.

The point of self-driving cars is to reduce deaths. If you can monetize that, great; but if you can't hit that baseline in some sort of testable way, then you shouldn't be able to put your car on the road.


I understand your point. But people are capable of recognizing the tragic loss of a life while also trying to understand both the cause of it and the implications.

The victim's life had meaning. To herself, to her friends and loved ones, and all of those affected by her untimely loss. We don't know her story, nor the stories of the over 6,000 pedestrians (a number that's been increasing) killed annually in the US alone as of 2016.[0] Nor the 37,461 killed in US motor vehicle crashes in the same year.[1]

We talk about the technology, because it was involved in this woman's death and because the technology has the potential to drastically reduce the overall number of such accidents. Understanding why she died, and how similar accidents can be prevented in the future, can give society back something out of an accident that only took. If we can learn from her death, anything at all, it behooves us to do so.

Yes, it's cold. It's extremely cold. But it's something doable. Even if these discussions offer little impact on the actual development of self-driving cars, they can still impact the political debate that's only now just starting. For better or worse, there will be those who point to this woman's death as an argument against the technology. Recognizing that her death is not simply one death, but an example of a type of incident that claims thousands each year without self-driving car involvement, is critically important. At times, statistics can dehumanize and cause us to overlook the very real pain depicted as a small part of a number. But at the same time, they also permit us to recognize problems and measure ways in which we can mitigate and even solve them.

0. https://www.npr.org/2017/03/30/522085503/2016-saw-a-record-i...

1. https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/...


What's noteworthy of this news event is the involvement of an autonomous vehicle.

People are being killed in accidents constantly. We aren't going to be in a constant state of mourning over it.

This is implicitly a discussion about autonomous vehicles, and the currently very relevant and heated topic of whether we want these things on our streets.


40100 deaths in 2017. Averaging 110 per day.


According to this[0], I reckon there are 16 cars (order of magnitude: tens, at least) in the pilot down in AZ. In almost a month, one person has died.

There are about 260M[1] cars in the US as of 2015, so perhaps more today. That means there are ~4e-7 deaths per car per day. On the other hand, we have ~1e-3 deaths per autonomous vehicle per day if we naively use the death of the person in this story. That means that so far, autonomous vehicles have a rate of killing people ~2500 times that of normal cars.

Of course, it's one data point, but out the gate, it's not looking so strong.

[0] https://www.theverge.com/2017/2/21/14687346/uber-self-drivin...

[1] https://en.wikipedia.org/wiki/Passenger_vehicles_in_the_Unit...
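Redoing that arithmetic explicitly (all AV inputs are the guesses above -- 16 cars, roughly a month, one fatality -- so treat the output as a very crude sketch):

  # Crude per-car-per-day comparison using the parent comment's guesses.
  us_cars = 2.6e8
  us_deaths_per_year = 40_000
  human_rate = us_deaths_per_year / 365 / us_cars      # deaths per car-day, ~4e-7

  av_cars, av_days, av_deaths = 16, 30, 1
  av_rate = av_deaths / (av_cars * av_days)            # deaths per car-day, ~2e-3

  print(f"human: {human_rate:.1e}  AV: {av_rate:.1e}  ratio: ~{av_rate / human_rate:.0f}x")
  # With exactly these inputs the ratio lands nearer ~5000x than ~2500x; either
  # way it is a high-variance estimate from a single event, and "per car-day"
  # is a weak denominator because test AVs log far more miles per day than the
  # average, mostly parked, private car.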


What bothers me is that people are jumping at all sorts of narratives and conclusions when the article clearly pointed out that the incident is still under investigation.


re: your edit. I agree with the sentiment, but I didn't quite get your comment because, before you posted it, the most upvoted comment/discussion on this story was about Uber failing to meet safety standards [0]. People have been posting about "the greater good", but it seems far from a consensus.

[0] https://news.ycombinator.com/item?id=16620042


1.75 people die per second. Calling for compassion and mourning of one stranger over the others seems fake. Personally I'm pleased about the circling of the wagons; the very real danger is that emotional kneejerks to this will set back the entire self-driving industry by at least 5 years. The trend of comments is a helpful loyalty signal even if it's not very substantive.


Reminds me about this comment a few weeks ago (which made an impression on me) :

>'How low has the SW development bar gone, if "it's okay" now means "at least it's not directly killing people"?' https://news.ycombinator.com/item?id=16541235

We now see the standard has slipped to: "Is the death of one person relatively so bad? People die every day."

Next a large crowd of people will be killed and the standard will become: "It's not so bad compared to creating skynet or starting thermonuclear war."

I'm not sure what comes after that.


I don't think the people arguing for a different standard are saying that it is ok for one person to die because people die every day. They're asking for observers to look at self-driving cars in comparison to human-driven cars. Because people dying is such a bad thing that we want to prevent, the relevant question should always be, "will this change make it more or less likely that somebody will die." And that is why it is relevant to compare pedestrian deaths from self-driving cars to pedestrian deaths from human-driven cars, but not relevant to compare it to skynet or thermonuclear war.

I don't know, however, how that comparison turns out. From the other comments, it looks like Uber's self-driving cars might be less safe than human-driven cars--in which case you and I would end up on the same side of the argument.


Sure, but in the Oculus Rift comparison someone could have easily died because the VR surgery equipment failed for a reason as stupid as a certificate expiring during surgery.

You can make the same argument that people die in non-VR surgery all the time, so it is not such a big deal.


You're absolutely right that designers, regulators and, in some cases, courts should take every death (and accident/near miss) seriously. But it seems to me like there are two questions here. The first question is, "should we have self-driving cars?" and the second is "how can we make those cars as safe as possible?" When people say, "compare it to a human driver," they are suggesting a way to answer the first question, not the second. And that does not mean that they think the second question is "no big deal."


It's also worth noting that the quote he presented from the Oculus Rift thread was criticizing someone for pointing out that it is less concerning if surgical training equipment crashes than if surgical equipment crashes, not for saying that it is no big deal if someone dies in surgery. I didn't see anyone in that thread saying it would be ok for VR surgery equipment to fail.


Interesting, isn't it?

But if you think about it, that's not a pattern anyone should be unfamiliar with. History is full of it.

I don't know, I really have no strong opinion here. But I think stuff like this is truly fascinating. There seems no end to where rationalization can go.

Welcome to the ambivalence of reality...


"It's not so bad compared to the alternative"

I mean, we two are still capable of discussing it, at least!


While you're admonishing people for their comments and employing stealth implicature like, "let's make sure Uber releases detailed data", do you think that maybe you should include a prominent notice disclosing your (and your former colleagues') ties to the industry that Uber is operating in—most importantly as a competitor?


Yikes - bringing someone's personal details into threads as ammunition in an argument is offside. It crosses into personal attack, so please don't.

https://news.ycombinator.com/newsguidelines.html


That's not at all what happened here (least of all because I'm not the person who was trying to advance an argument).


I'm not sure I understand, but the high bit here is that bringing in someone else's personal details crosses into personal attack and isn't ok.


> Can't we do better?

no. there's a reason our community has the reputation of being heartless. it's because it's true.


It's not at all true. People here care very much. It would be weird if a large population sample (4M a month or so) had a different human nature than humans in general.

What is true is that large internet forums suffer from systemic effects that compound into problems that no one has yet figured out how to deal with.


> large internet forums

what in the world did you think i meant?

you can try and pretend all you want, it ain't my fault if the mirror ain't a pretty thing to behold.


>Fatalities will happen sometimes

How many self-driving cars are there on the roads right now? And what's the fatality rate? I'm quite sure it's higher now compared to human-only drivers.

> they are unavoidable sometimes

Were autonomous cars not designed to prevent such situations?

>Let's, as a community, set the standard for how we will react to these events

Yes and let's punish non-conformists!

> which gets posted on HN whenever a self-driving article comes out.

I honestly have no idea what that comment means.


Probably not, since Google drove millions of km without a single fatality.


As of late 2017, Google/Waymo had recorded just 4 million miles of real-world self-driving in its entire history: https://techcrunch.com/2017/11/27/waymo-racks-up-4-million-s...

NHTSA data measures number of fatalities by the hundreds of millions of miles driven -- the latest rate being 1.18 fatalities per 100 million miles: https://www.nhtsa.gov/press-releases/usdot-releases-2016-fat...
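Putting those two numbers together shows why 4 million fatality-free miles doesn't demonstrate much either way (a quick sketch, using only the figures quoted above):

  # Expected fatalities over Waymo's mileage if it merely matched the human rate.
  nhtsa_rate = 1.18 / 1e8      # fatalities per vehicle mile (2016)
  waymo_miles = 4e6

  expected = nhtsa_rate * waymo_miles
  print(f"expected fatalities at the human rate: {expected:.3f}")
  # ~0.05 -- even a fleet exactly as dangerous as human drivers would most
  # likely have zero fatalities at this mileage, so "millions of km without a
  # fatality" can't yet show the technology is safer than humans.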


Yes, absolutely, let's ostracize people who react to this situation (a machine killing some random person in a fatal beta test) with anything other than compassion and humility.

Those are not people I want in my engineering community; just impossible to trust they'd make the right calls and have the right priorities in developing cool new stuff.


https://www.abc15.com/news/region-southeast-valley/tempe/tem...

"The fleet of self-driving Volvos arrived in Arizona after they were banned from California roads over safety concerns."


I wonder how software updates are validated and tested before going to production in autonomous cars. This is scary - imagine being the developer responsible for a similar malfunction. I'm not suggesting that was the case here - just that this might need as much validation as medical equipment (hoping that it won't stifle innovation).



We changed the URL from https://www.wcpo.com/news/arizona-police-investigating-self-... to one that is longer and doesn't autoplay a video.


Both the computer and the driver failed to avoid the pedestrian. It's possible that the backup driver wasn't paying attention and the technology for autonomous vehicles is not ready. It's also possible that the situation was challenging for both the computer and human and that both failed.


I wonder how many different "cultures" of drivers there are.

People from different localities drive differently. I hope they use drivers from, say, NY, LA, and various small towns to teach the AI.

People who live in towns of ~10k drive differently than a New Yorker in the same town.

EDIT: They might be training with a biased sample



I think the test cars should be covered in layers upon layers of foam or another kind of shock absorbing material. This should reduce the collision impact at least a little and can be the difference between someone getting killed and escaping with injuries in some cases.


This comes just few days after this: "When an AI finally kills someone, who will be responsible?" url: https://news.ycombinator.com/item?id=16584862


This seems like a good time to question if we really should have cars at all. Autonomous cars might be better in some ways but they are still tons of metal flying at high rates of speed. Accidents will happen and people and other animals will die on a regular basis. Cars also require tons of infrastructure that walking and biking don't. You also have the environmental destruction of GHG emissions and from the mining of rare metals needed for batteries, you have roads and parking lots destroying habitat and using up valuable space, air pollution and smog from tailpipes or energy generation, noise pollution, and more harmful effects. Autonomous cars might mitigate some of those things but most of it will still be there (or we'll just use cars even more and so we'll have just as many deaths total along with the rest of it?). What about doing research and testing a world without any cars? It will probably be a better one.


Paris has greatly reduced traffic, thinking along these lines.


"Uber is now the target of at least three potential class action lawsuits, at least five state attorney general investigations, and an inquiry by the FTC because of Sullivan’s decision to pay off hackers and the cover ups."

via twitter.com/adamscrabble


I feel like I should share this here.

The first person killed by a car:

http://www.guinnessworldrecords.com/world-records/first-pers...


There are roughly 3e8 cars in the US, they kill 3e4 people a year => each car kills 1e-4 person a year. If we have say 100 AV right now in US, each car kills 1e-2 person a year. So that's 100 times more dangerous than humans...
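
A minimal sketch of that back-of-the-envelope arithmetic (the fleet size of 100 AVs is a guess rather than a real figure, and a single fatality gives essentially no statistical confidence, as the replies below note):

  cars_us = 3e8           # rough number of cars in the US
  deaths_per_year = 3e4   # rough annual US road deaths
  avs = 100               # assumed number of AVs on the road (a guess)
  human_rate = deaths_per_year / cars_us   # ~1e-4 deaths per car per year
  av_rate = 1 / avs                        # ~1e-2, given one fatality so far
  print(av_rate / human_rate)              # ~100x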


That's not a good comparison because self driving cars are on the road a lot more often than regular cars.


yep, sure. I did that computation to get a rough estimate. Also, with one or two fatalities, I guess the confidence in the stats is not very high.


If the car is deemed non-responsible it makes it the perfect crime...

How long before someone tries to program a car to "hunt" a victim and hit her when she makes a mistake, so that it appears to be a pure accident with no one at fault?


I once read somewhere that privacy issues (which are very important to us, IT people) were not treated enough because they didn't have enough blood on their hands.

At least with automated cars, there will be blood.


As someone who cannot drive because of a disability, I am putting real hope into self-driving cars becoming a thing during my lifetime. Uber may've just set that back a decade.


And now the costs of developing autonomous cars has been socialized to those unfortunate enough to be walking in front of them while they learn.

Perhaps her estate should receive a perpetual royalty.


Just curious: Do tech companies “pay” the state so they can try out their latest and greatest? Or do they look for states with the least amount of restrictions?


Isn't this just lobbying?


Could this help Uber?

Americans seem to enjoy the "X doesn't kill people. If only more people had X, less people would have to die." argument.

<braces for downvotes>


So, who is going to jail (or at least - trial) over this?


If you skip the emotional part of the news, the interesting question is: who is responsible when a self driving car made by a company kills someone?


I suppose speculators will speculate - but c'mon. just wait and we'll have video and telemetry and this whole conversation will be moot.


Those who say it's not safe enough: given that no engineered system is 100% safe, when/how can you say that a system is safe enough?


Given that so many of autonomous car accidents logged by the California DMV seem to be the fault of the human driver trying to take control, I’m surprised that the accident here reportedly happened in autonomous mode, which for all its flaws seems to have been good at not hitting things in front of it. In fact, seems like most of the CA accidents were the autonomous car being rear ended or side hit from other cars, usually for going too slow or stopping unexpectedly.


Everyday I see crazy, drunk or angry people just walk across the road recklessly. Some seem to walk in defiance of the cars.


So, what's the statistic for self-driving cars killing people vs. people killing others or themselves?


When will this company just go away ... after everything it's done ... it's now killing innocent people!


I'm paranoid when I'm on a bike. Any road with a dedicated bike lane may be classified as bike-friendly, but I can't consider it so if cars are on the same road. IMO, the only bike-friendly routes are where there are no cars. I wish pavements could be made wide enough for bikes everywhere, so that bikers could be on the road only for a limited amount of time while crossing intersections.


> Elaine Herzberg, 49, was walking outside the crosswalk on a four-lane road in the Phoenix suburb of Tempe

Although I agree with your statement in general, it sounds like she was already walking her bicycle while crossing the intersection, and the fact that she had a bicycle was a moot point.


There were an estimated 40,100 motor vehicle deaths last year, or a drop of 1 percent from the prior year.


Just out of curiosity, how many people have been killed by human-driven cars on the same day in Arizona?


My guess is 2 or 3 deaths that day. Okay, according to the 2016 Arizona Crash Facts Summary [0], there were 952 fatalities on the road that year, including 193 pedestrian fatalities and 31 pedalcyclist fatalities.

So, on any given day in 2016, there were 2.6 fatalities, with less than 1 in the pedestrian/pedalcyclist groups.

[0]: https://www.azdot.gov/docs/default-source/mvd-services/2016-...
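
Spelled out as a quick sketch, using the numbers from the 2016 Arizona Crash Facts Summary linked above:

  total, pedestrians, cyclists = 952, 193, 31    # 2016 Arizona road fatalities
  print(total / 365)                     # ~2.6 fatalities per day
  print((pedestrians + cyclists) / 365)  # ~0.6 pedestrian/pedalcyclist fatalities per day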


Uber's self-driving program seems to be one of the most unethical, most poorly-run in the entire industry. There were plenty of reports before warning us of the questionable quality of their research: cars changing lanes suddenly, blowing red lights, and now one finally killed someone. Every prior incident should have been treated as a serious matter.

Waymo et al has had none of these issues. Time to revoke their licence to test?


> Uber's self-driving program seems to be one of the most unethical, most poorly-run in the entire industry.

I mean, it's Uber. I'm not sure exactly what people were expecting. Their entire schtick is being unethical and poorly-run.


Yeah. Their motto seems to be 'easier to ask forgiveness/pay fines instead of asking for permission'.


Irrespective of the ethics, the plan "worked" in that they managed to pressure local governments to change rules or otherwise let Uber operate.


They did manage to get some local governments to do that, yes. I’m not sure how to interpret your comment other than as an apology/excuse for their illegal and unethical behavior.

If there’s more to your thought than “the ends justify the means,” I’m curious what it is.


Uber has been rewarded for employing schticks that GP called "unethical and poorly-run." Is it unethical? Yes, and I strongly believe that "Move fast and break things" is wholly inappropriate when people's lives are involved. Is it poorly-run? I think Uber is actually well-run if you accept that "ends justify the means" is in their DNA.


What would compel me to accept that an ethical standard that can be used to justify any amount of immoral and unethical behavior is “in a company’s DNA?” Why would I grade them on that curve?

This is like saying Duterte is a very effective anti-drug crusader if you accept that murdering innocents and drug dealers is in his DNA.


comma ai must be so jealous


It's no less immoral in programming languages than in real life, precisely because it tends to slip from the former into the latter.


[flagged]


Not intentionally programming them to run red lights, just intentionally or unintentionally not taking necessary precautions to prevent it from happening. They can get it right, or have it right now, and it's pretty clear which side they've chosen.


Nobody is suggesting they're doing that. They're saying Uber cuts corners to save a dime among other unethical practices (like stealing research from competitors).


Uber is the embodiment of its employees. It is a flame that unethical moths are drawn to, so they can realize their vision by the means that delight them. Uber is the Phoenix of Enron.


Its employees operate within the culture and norms established by its owners and investors.


[deleted]

[sorry, dang. you're right.]


Please don't reply to a bad comment with another bad comment. That just makes this place worse for everybody.

https://news.ycombinator.com/newsguidelines.html


You're right. Thanks for calling me out.


They stand to gain by being quicker to market by failing to institute adequate safeguards.

Criminal negligence, not direct malicious criminality.


Not a bad business plan. The market rewards risk. I’ll use Uber as an example.


This is why unregulated growth makes cancer the perfect metaphor for corporations.


The art of being cheap and the virtue of being lazy oft end with someone getting hurt, and Uber embodies both of those quite well.


Not intentionally doing that, but, intentionally ignoring bugs.

"Sir, we have several bugs, one of them would cause the car to run a red in the following conditions... [fictional made-up condition: in case it sees a clown car with Brad Pitt wearing a pink dress in it]".

"What are the chances of that ever happening? Ship it!".


I somewhat doubt they even know or care that there might be bugs before they do something awful. It's like they're using the public highway as their simulator.

I don't know if this job ad for a self-driving simulation software engineer is still current, but it is somewhat worrying given the broad scope of the role they appear to be advertising for.

[0] https://www.uber.com/en-US/careers/list/27029/


I think it's more what Uber stands to gain from not ensuring that their cars are programmed to not run red lights. Which is first-to-market advantage.


No one said that this was their intention.


I don't think anyone thinks it's intentional, just that they're a poorly run company and have been less diligent in fixing bugs and ensuring safety.


Which I'm thankful for since I now have access to taxis because of them.


This comment is in spectacularly poor taste given the subject of this news story.


IIRC, Uber initially launched its self driving cars by flouting California law: https://www.google.com/amp/s/www.cnet.com/google-amp/news/ub...


EDIT: Yeah this was a bit reactionary on my part. I get it, it's not over the $150.

It says a lot about Uber that they weren't willing to fork over $150 for a permit.


It had nothing to do with the $150, but part of accepting the permit is taking legal responsibility to report disengagement from autonomous testing. Uber didn't want to share this information since it would make them look bad.


It does say a lot about Uber, but clearly the $150 was not the issue. They didn't want to report disengagements.


It's more likely that they couldn't be bothered with the associated process than that they couldn't afford the $150.


It wasn't about the $150, it was about not sharing data about the status of their program. I'm not sure which is worse.


We should wait for the results of the investigation first. It could easily turn out that the person killed made some bonehead move that resulted in an unavoidable crash. I was once told how a friend of mine fatally hit an older man who decided to cross the highway quickly, giving him no chance to react and leaving him with a life-long trauma.


Calls to wait for facts never stopped the internet outrage machine in the early moments of news release. I doubt it will do much here, sadly. But it's worth pushing back against, even if it won't work IMO.

People will always relish having their biases confirmed... even after a rare one-off event with a sample size of 1 in a burgeoning industry.


Yup - I've personally witnessed one of these almost zoom through a red light at an intersection and was appalled. I'm honestly not surprised...in their effort to race to the bottom Uber's autonomous vehicle program has been extremely aggressive - to the point of total disregard for safety. Their business model may be built on pipe dreams of autonomous travel (without drivers to pay), and hopefully this shutters that. In fact, the whole company should probably be shut down IMO - but I digress.


Agreed. I've been in the cars in Pittsburgh and the operators laughed about how poorly they performed on certain roads. Slanted telephone poles caused the vehicle to literally slalom down the road.


> Waymo et al has had none of these issues. Time to revoke their licence to test?

I think Alphabet is to blame here too in a sense (or at least the industry as a whole). Why didn't they push for more strict testing rules? It's in their interest too.


This is how cars came to be on the road in the first place. Despite the fact that they were erratic and deadly, and spewed noxious gas everywhere, the manufacturers and automobiling clubs succeeded in convincing everybody that roads were for cars. Uber may well be the worst actor in the self-driving industry, but the industry as a whole is going to need to invent new crimes akin to jaywalking. This will allow it to shift blame to those killed by autonomous vehicles, just as jaywalking implies pedestrians are to blame for being hit by manned vehicles.


Fully agreed regarding Uber's ethics, but...

> Waymo et al has had none of these issues.

... absence of negative media coverage doesn't prove anything.


Yeah I missed that but it doesn't look good [1] (december 2016).

[1] - https://www.theguardian.com/technology/2016/dec/19/uber-self...


Par for the course in regards to Uber.


We do not yet know whether another autonomous vehicle or a human driver would have done better in this situation. THAT SAID, asshole company fields asshole autonomous driver. Did we expect differently? Uber cuts every corner.


> THAT SAID, asshole company fields asshole autonomous driver. Did we expect differently? Uber cuts every corner

Exactly. Fool me once, shame on you; fool me twice, shame on me.


Uber consistently has some of the highest disengagement rates reported in California, their cars really are just worse than everyone else's.


What does that mean in practice exactly? Does it directly imply that pedestrians are at a higher risk of being hit?


It means the cars are sufficiently dangerous in general that the human drivers have to constantly take over manual control to avoid accidents. So they are dangerous to someone. Apparently the human didn't make it in time this time, and it turns out that someone was 'pedestrians/bicyclists'.


As a pure safety judgement that'd depend on:

a) what % of disengagements are 'to avoid accidents' vs navigating an entirely safe but complex situation. For ex: a parked UPS truck on a narrow 2-lane rd.

b) the degree of real-world complexity of the tests the cars are engaging in (compared to other vendors). For ex: testing cars on simple suburban routes = fewer disengagements.

Not trying to defend Uber here, I'm just trying to understand the distinction between a high-level general statistic and its real-world implications.


A complex situation == a dangerous one for a prototype system. If it's complex, the car doesn't know what to do and can't be relied on, which makes the situation dangerous even if it were 'complex but safe' for some hypothetical human driver. The uncertainty is in the map, not the territory. Waymo checks disengagements in simulators to see how many are truly dangerous, and most aren't; Uber, for some reason, declines to discuss whether they do so and what the results are. Much like they declined to tell the truth about incidents like running a red light at high speed.

There is no reason whatsoever to believe that Uber's horrific disengagement record, which is orders of magnitude worse than competitors often in the same cities like SF, is because they are tackling orders of magnitude harder situations per mile or are orders of magnitude more conservative, and every reason, even before this fatality (the only one so far despite many self-driving programs running concurrently over years), to believe they were just plain worse.

> Not trying to defend Uber here, I'm just trying to understand the distinction between a high-level general statistic and it's real world implications.

Again, it has to be dangerous for someone. It can't be more dangerous yet not dangerous to anyone in particular. That just doesn't make any sense. Uber runs on the same roads and the same traffic with the same basic approach as everyone else, there's no confounder which could produce a reversal.


You'd think Uber would be extra-vetted before they were allowed to do something like this.


Their San Francisco testing program didn't even have a license to test because they didn't want to pay for a permit.

https://www.theverge.com/2017/2/27/14698902/uber-self-drivin...

Nothing to revoke if they never had a license to begin with!


Not sure how that regulation would have made their cars safer in a way that's relevant to this accident?

Local governments have stopped other self-driving companies before with cease-and-desist letters with threats of serious punitive fines. I highly doubt by not buying a $150 license Uber would suddenly have carte-blanche to do what they want, that's not how it works.

If the state gov was really interested in stopping them they easily could have regardless.


Arizona had 10 pedestrian deaths in a single week: https://www.azcentral.com/story/news/local/phoenix-breaking/...


As somebody else pointed out, what is the ratio of total miles driven in metro Phoenix to fatalities?


Did the person riding along attempt to take control? I wonder if they will get held responsible..


If only there was this much outrage and discussion every time a human driver killed a pedestrian.


There is a standard in the motor industry called MISRA; because the target user is human, you have to meet a certain safety level. Otherwise bad things happen, like this.

Maybe the self-driving industry needs to have some safety rules too, to cover when the unexpected happens.


The various reports on this aren't clear- did the car hit a pedestrian, or a cyclist?


Tempe Police say woman killed by self-driving Uber car was pushing bicycle across street when struck.


I don't see why this lesson could not be learned on a test track or simulator.


Headline should read 'woman crossing street kills electric cars'


Sad to hear it's happened this soon but this was inevitable.


Is this the first death ever?

(Do we count Tesla's autopilot as self-driving?)


Hm, there have been other crashes where the 'driver' in the car died (I recall one where a car in autonomous mode rear-ended a truck, something about how the sensor was blinded by the setting sun? And it had alerted the driver to take control several times before the crash) but this is the first incident I can recall where a pedestrian was struck and killed by an autonomous vehicle.


IIRC in that case the driver T-boned a truck crossing the road, because the side of the truck was so shiny that it read as the horizon to the car's sensors.


In the Tesla one, according to news reports, the sensors did not recognize the white trailer of an 18-wheeler against the sky.


I believe it's the first "third-party" death.


Uber is responsible for this, and corporations are people, right? That reminds me I have an episode of opening arguments to listen to on the subject.


This will be the first major test of liability and insurance with a self-driving vehicle.


Why am I not surprised the first death of a pedestrian came from an Uber-driven car?


Ubye


"Uber has paused self-driving operations in Phoenix, Pittsburgh, San Francisco and Toronto, which is a standard move, the company says."

...there's a "standard move" when one of their cars kills someone?

I don't think that really came across the way they wanted it to.


This may not have been an unexpected circumstance. The "pedestrian outside of the crosswalk" report is likely wrong. There is visual evidence that strongly supports that this was a case of the Uber hitting a bicyclist while merging into the right turn lane.

Here's a picture of the crumpled bicycle (note dent on the front right of the car): https://twitter.com/daiwaka/status/975767445859287042

The sign it is lying next to lets us locate the exact point of the collision on Google street view: https://www.google.com/maps/@33.4370667,-111.9430321,3a,75y,...

It's right where the vehicle lane crosses the bike lane approaching the intersection. Very unlikely a pedestrian was walking a bike there. Much more likely the Uber hit the bicycle while merging.

EDIT: I take back "very unlikely" a pedestrian was crossing there. There is in fact a very weird "X" pathway in the median that pedestrians are not supposed to be in. I didn't see that until I looked at the satellite view.[1] So possibly she emerged from there and was walking her bike across the street. That still looks pretty bad for Uber though, since that street is a straight shot with clear visibility and the dent's on the right side of the car. Meaning it would have swerved left to avoid a late-detected obstacle rather than swerve right away from her entering the roadway. She also would have literally had to walk straight into oncoming traffic. I still think bicyclist is the most likely scenario and it's odd the PD doesn't mention the crumpled bicycle.

[1] https://www.google.com/maps/@33.4365931,-111.9425027,198m/da... (street view shows a "no pedestrians" sign)


He posted a tweet later, it wasn't a cyclist:

https://twitter.com/daiwaka/status/975771533745336320


Wait, so that picture / video of the bicycle on the ground at what looked like the scene had nothing to do with the accident at all? It seemed really focused on from my POV. I'd find it rather strange if the bicycle had nothing to do with it although this is just reporting, it's not like it's the official police report or anything.


CNN says she was walking her bike when she was hit


Ok, that's the middle ground I assumed it was showing. I really was hoping it wasn't a bicycle that had nothing to do with the situation.


Lots of confusion! It seemed the video was for an injury collision rather than a fatality? This is what we get from posting a Cincinnati source for an event that occurred in Tempe.


Wow, that's really bad! Easy to see how the code didn't factor in the possibility of someone using the bike lane, which is a very bad look for Uber.


It's not even about the bike lane. It's about not hitting a person. Beyond that, it's about understanding unforeseen circumstances.


Now, I'm not a fan of AV, but you're definitely jumping to conclusions here.


This is looking worse and worse for Uber.


This is making Kitty Hawk look a lot more attractive.


I can't imagine how bad it looks for non-autonomous cars considering the 100's of pedestrians that are killed everyday in the USA.


"100's of pedestrians"? 2017 was a record high for pedestrian fatalities, at ~6,000:

https://www.npr.org/sections/thetwo-way/2018/02/28/589453431...

The urban fatality rate of all fatalities is 1 per 100 million miles traveled: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/...

As of September 2017, all of Uber's autonomous cars have driven a total of 1 million miles over a period of 2.5 years: https://www.axios.com/ubers-autonomous-cars-have-driven-1-mi...

There's an argument to be made that autonomous vehicles are better for society and safety. But it sure isn't statistical.
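
A rough sketch of why it isn't statistical, assuming the urban rate of 1 fatality per 100 million miles applies to the kind of driving Uber's fleet does:

  uber_miles = 1_000_000           # Uber's total autonomous miles as of Sept 2017
  urban_rate = 1 / 100_000_000     # fatalities per mile (all road users, urban)
  print(uber_miles * urban_rate)   # ~0.01 expected fatalities at the human rate

One actual fatality against roughly 0.01 expected looks terrible, but with a sample of one event the error bars are enormous.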


considering the 100's of pedestrians that are killed everyday in the USA

According to the CDC, 5,376 pedestrians were killed in car accidents in 2015. According to the WashPo, 28,642 pedestrians were killed in car accidents from 2010-2015. That gives a rate of 13-15 pedestrians killed per day in the USA. Too much, yes, but not hundreds of pedestrians killed everyday.

https://www.cdc.gov/motorvehiclesafety/pedestrian_safety/ind...

https://www.washingtonpost.com/local/trafficandcommuting/ped...
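
The per-day arithmetic behind those figures, for reference (numbers from the CDC and WashPo links above):

  print(5376 / 365)          # ~14.7 pedestrian deaths per day in 2015
  print(28642 / (6 * 365))   # ~13.1 per day averaged over 2010-2015 (six calendar years)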


They went out of their way to avoid regulation and then ran over a person in a bike lane.

I was referring more to their negligent design than to self-driving cars themselves, but if we're going off of that:

In 2015 there were 5,376 pedestrian deaths by a vehicle. There were, at the time, an estimated 263.6 million vehicles registered.

There has been one death so far this year from a self-driving vehicle, and how many of those are on the road at the moment? If you're going by deaths-per-vehicle, autonomous is losing.

I don't dislike all autonomous vehicles, but Uber is showing a lack of care for all but basic safety measures that seemingly all of their competitors have gotten down thus far, and alongside that they're avoiding regulation quite heavily.
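
A sketch of that deaths-per-vehicle comparison (the self-driving fleet size below is purely illustrative; the real number isn't public):

  ped_deaths_2015 = 5376
  registered_vehicles = 263.6e6
  av_deaths, av_fleet = 1, 200                  # fleet size assumed for illustration only
  print(ped_deaths_2015 / registered_vehicles)  # ~2e-5 pedestrian deaths per vehicle per year
  print(av_deaths / av_fleet)                   # ~5e-3 per vehicle, from a sample of one event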


100s a day is a little off, the figure is around 11 per day. Not saying that's great, though.


> You can't really get away from this without putting these cars on rails.

We should really just build rails. Self-driving cars are dumping billions of dollars onto a problem better solved by rails. Several cities already have self-driving trains, including London, Singapore and Kuala Lumpur, and they transport millions of people every day without incident:

http://penguindreams.org/blog/self-driving-cars-will-not-sol...

Self-driving car research is asinine. It won't even begin to solve the transport problem in America and it's round-peg/square-hole tech. It has the cool factor, but there is existing tech that is insanely more useful for a fraction of the cost. Americans hate public transportation (thanks GM) so much that we'll never see real/good transport implemented in America.


I couldn’t put my finger on it at first, but I then realized why your comment doesn’t make any sense - you’re effectively arguing “if I could master-plan society and the economy for a while, this is how I’d decree it to be.”

Maybe some people don’t want to live in dense urban areas, maybe they don’t want to ride on a rail, maybe they want privacy. But looking out the window, I can say American society has soundly rejected the rail as the ultimate solution.

Short of understanding your perspective as a longing personal desire for trains, your conclusion is so far removed from reality that it’s nonsense.


How can you reject something you never had or used? Almost nobody alive in the US today has ever engaged with general commuter rail. Trains as a common mode of transit died off with the rise of cars and planes post-WW1 and were near-eradicated post-WW2.

You can't say people reject something they don't even know is a possibility.


I don't know why you're getting downvoted, but I agree with your comment ¯\_(ツ)_/¯


I wouldn't say society has rejected the rail in America, but rather that the political structure has rejected the rail. There's a dominant paradigm in public transport that it must be run by the government (which ironically is somewhat counter to the broader American ethos). Yet there are places like Singapore where the public transport is run via (in the past fully, but currently partially) for-profit, publicly traded companies, where the QOS is far higher than in the US, the service is markedly cheaper, and, I would say, generally more accessible than it is in the US.

There would be considerable US pushback against such a system, sadly.


>But looking out the window, I can say American society has soundly rejected the rail as the ultimate solution.

“Soundly”? If you mean in the way a child “soundly” rejects vegetables, maybe. But the American public has never had this conversation. Capitalism and its ensuing cultural effects had it for them.


Maybe American developers have rejected the rail, but I don't think American passengers have.


> maybe they want privacy

Every sci-fi movie of the last 50 years has solved that problem: you don't use trains; you put the cars on the rails (by grouping them together into roller-coaster-looking setups.)

> maybe they don’t want to ride on a rail

Same here: cars on rails can, in most conceptions, signal to the rail system to switch them onto an offramp that dumps them back onto a road.

Basically, here, "rails" are serving as safer replacements for highways. Every road that isn't a highway would still be a road. You're just enabling people to, effectively, "have their car take the train", rather than having to leave their car at the train's parking lot.


The roller coaster setup seems so feasible. What are we waiting for?

I guess the obvious problem is that it would require simultaneously modding both private vehicles and public infrastructure at impossible scale.

It seems like such an obvious design end but implementation would require communism :/.


Rails don't work for the last mile. The great thing about cars is that they don't suffer from the last mile problem.

It seems like a much better plan is to separate drivers from other people more rigorously, and maybe mandate people drive the last-mile themselves (with AI help, obviously).


What happens if the 'rail' was actually a specific, demarcated line that autonomous self-driving cars used on specifically selected roads that augmented existing public transportation options? That would alert human drivers/pedestrians/bicyclists to be a bit more careful - similar to the brightly colored green lanes that are starting to be provided for bicyclists.

Similarly for long haul trucking applications - could the far left lane/carpool lane be safely dedicated to autonomous trucks during the hours of midnight-3AM with some bright blinking signs paid for by the autonomous trucking industry trade group?


This. I think this is the correct solution, with the self driving cars refusing to self drive on roads that don't have this infrastructure. I actually wonder how much money it would take to cover, say 95% of roads in the US with some kind of "smart paint" that can be detected in rain, through snow, and when covered in dirt or gravel, vs the amount of money spent on developing the lidar.


Was gonna post almost the same idea. You don't need a true "rail" to keep an autonomous car from mowing over people. You need something that's pretty binary "am I on my pre-approved area or not". A special kind of paint or something very low-tech could be the solution.

There will still be the last-mile problem. Even if the "rail" is just cheap paint, you'll almost definitely never have 100% of roads painted, and paint wears over time or gets buried under snow etc.

Eventually we're looking for ways to add "more 9s" to the reliability and safety. A non-rail rail could add a bunch of 9s to well-trafficked routes.


As a Californian, I predict we'll convert all the HOV lanes to exactly what you've described.


For what it's worth, it doesn't really need to. Feet work well for the last half mile or so, and a well-designed city can get stops this close.


> ... and a well-designed city can...

As someone who lived in West LA for years and saw the new line come in, I'll stop you right there. Cities like LA, Mexico City, San Jose, etc. are not well designed, and trying to re-invent them is going to take one hell of an earthquake.


I used to live at 16th and Santa Monica and was really excited about the Expo Line. I ended up moving before it showed up because it was many years late (I'm glad it finally got built though!)

I used to ride a recumbent to UCLA now and then because I had a death wish, and it shocked me that it took 25 minutes and yet people spent 60 minutes in traffic in their cars every day just to get across the 405.

You're right, though, those cities are mostly lost causes, more for the attitudes present in them than the infrastructure. Doesn't mean we can't try to make any new ones better though.


Oh man, the Expo line isn't synced with the lights, it's horrible. That and the bums like to go crazy and sit on the rail lines, causing jams at the street crossings. It's not that frequent, but it happens. It used to back up to Bundy on a bad day, now it's every day.

Riding a recumbent to UCLA from 16th is hardcore. Everyone I knew that rode in west LA got hit at least once, if not more.


I did indeed get hit. I still have aching ribs some mornings. It's one of the biggest reasons I moved away.


Just ease the regulations on building height and density, and multi-story multi-family buildings will start to appear in good locations through free market capitalism.


> Just ease the regulations on...

My brother lived off Geary for a few years and went to a few public meetings on the proposed subway line they want to put under it. He was interested in the idea, as the N-Judah is a methadone clinic (last he was there a few years ago, caveats apply). He never saw such NIMBYism and such tight resistance by the property owners on Geary to any improvements whatsoever.

You don't just ease regulations or really do anything 'big' in any city. Palms must be greased, and if you do not it is a horrific process that takes at least a decade.

I'm not saying that you are wrong about the free market. However, my brother thought that the most fantastic thing about the new Star Trek movies (esp. the new Wrath of Khan) was not the warp-drive, nor the aliens, but the very tall building in SF. Even in 400 years, there is no way the planning boards would allow such things.


The San Francisco Board is subordinate to the laws of the state of California. If California wants to ease regulations against denser urban areas, California can do it.


I mean, you aren't wrong, but you are unrealistic. Something like that will take years just to get to a final vote, and then the lawsuits will tie up the issues for at least 20 years. Say 30 years total, just to get to the point of a permit to build. To get things moving faster, you may need to re-write the CA constitution.


The billions being spent on self-driving cars cost less than the hundreds of billions it would take to convert the USA's cities to be even remotely well-designed.


Except after the billions on self driving cars you still have horribly designed cities.

If there is anything to be nostalgic about in the past, it's how civil engineering projects of such scale were a lot more feasible when the complexity of doing them was so much lower.


Yeah, but converting all of our cities to 'well-designed' cities is not really an option at this point. Not only that, but millions and millions of people live in places that aren't cities.


Not even that. Nothing is stopping a small fleet of autonomous miniature vehicles from breaking off from the rail line, carrying people home, and returning. Lower speeds, lower mass, fewer deaths. We're only bound to the legacy car size by accommodating existing cars.


See also: folding bikes. You just described my commute for a few years.


Feet work well in cooperative climates. They do not always work ‘well’ in Boston, Seattle, New York, Chicago, Detroit, etc. Consider not just basic walking, but carrying groceries, children or other items.


I have a 7 month old and walked with her to get groceries in the snow last weekend. Seemed to go fine. Fortunately it was only a 300 meter walk.


Chicago reporting in. I manage to walk to my grocer in cold temps and carry things back.


>Feet work well for the last half mile or so

I invite anyone who buys into this sweeping generalization to walk a half mile in Las Vegas, NV; Tempe, AZ; Houston, TX; New Orleans, LA; or Tampa, FL in July or August.


Many cities in Spain, Italy, Greece and Portugal see temperatures just as hot as Houston or Las Vegas. Yet public transport is just as common as in the rest of Europe.

Walking half a mile in 35 C (95 F) isn't an actual problem unless you have a medical condition or you are morbidly obese. For those very rare 40+ C (105+ F) days, throw an official midday siesta, or just shut everything down like for snow days.


>Walking half a mile in 35 C (95 F) isn't an actual problem unless you have a medical condition or you are morbidly obese.

Again with the hand waving as if this was a solved problem. Trust me - if you actually try to walk half a mile on a 90+ degree, high humidity day you will regret it. The issue isn't just infirmity - i.e. those prone to heat stroke or overweight, but one of comfort. If you actually try tooling around in Houston (for example) you'll end up soaked in sweat. It'll be worse if you're dressed for the office in long sleeves and slacks.

That's when you'll discover that it's more than just the riding public that needs to adapt to public transportation uber alles. Can I use a shopping trolley for groceries? Sure. Can I cycle to work? No doubt. But what if your employer doesn't have showers available? Mine doesn't. I can't show up drenched in sweat looking like a wet rat and stinking all day of sweat. And that's exactly what'll happen if I ride the few miles I have to work. Even if I take a change of clothes to work I still have no place to shower and neither does anyone else unless they are health club members. There are many externalities you're just hand waving away. As someone who lives in this climate I assure you I'd love to see light rail everywhere but it just won't replace the last mile for people without drastic societal changes.

There's also the fact that bus and rail frequency needs to dramatically increase to entice riders and also accommodate them when they do come. Will people choose a 45 minute bus ride over a 15 minute drive? Particularly when on average you'll spend 7.5-10 minutes baking in the sun or lightly shaded by an un-airconditioned bus stop? Not unless forced to, no.


I'd love to see you deliver a dining room set, complete with a sofa, a love seat, an arm chair, a mattress with a frame, and an 85" TV using your feet.


Bit of a strawman, no? That's obviously a good use case for a van. But I never said we should completely eliminate all roads, did I? I just think the primacy of the automobile is worth questioning - moving people (and goods) is the goal, not moving cars.

I mean, I'd love to see you medevac a gravely wounded person from the middle of Arctic tundra to a hospital 120 miles away on your feet.. except I wouldn't, because we have helicopters for a reason. We don't assume you should have to use a helicopter to get groceries though.


Here in The Netherlands we use bikes. They're great; so much faster than driving for short distances. It helps that housing is, for the vast majority of people, within a 10-15 minute cycle, and that the land is flat.


I've always wondered: how do disabled people get around in cities designed for bicycle travel? Mostly those powered wheelchairs? If so, how did they get around before the invention of those? (The answer in America has always been 'you put them in a car, and then they can go wherever a car can go.')


You see a lot of those electric wheelchairs. There are also special buses provided by the council. Disabled people get a personal budget for stuff like help in house and a mobility budget. Because, you know, you need to look after the weakest members of society.


Oh man, I live in Melbourne, Australia and riding a bicycle here is terrible. The bike lanes are squeezed between parked cars and driving cars, with the risk of getting doored being really high (should be zero). And if you do get doored, the attitude of the person trying to kill you thinks that it is the cyclists fault.

Self driving cars can't come quickly enough.


I know, I happen to live in Groningen and love cycling as a solution. However, there are some situations where I still like a car. Cycling in winter isn't great, cycling with baggage isn't great either, and cycling in the rain gets wet.

Cycling is great, I hope to get along in life without a car, but cars are still great to have as an option.


Trust me, we know. Your reliance on bikes is a huge inspiration and talking point for US urbanists.

Unfortunately there are features of most US cities (lack of public interest, obesity, disability, low density, long commutes, weather) that block our adoption from approaching yours.


San Francisco used to have rail streetcars all over the city, many more than the 6 lines now.


Roads aren’t free! With the technology available today, I bet a rail based solution could be more economical.


Rails definitely do work well for the last mile in very developed cities (i.e. London), but it's probably an unsustainable infrastructure cost for most places. That being said, it's easy to just change where that last mile is in the future (i.e. mixed zoning, flexible working environments, increasing mega-cities etc...).


I don't think "very developed" and "unsustainable infrastructure" are mutually exclusive. I would argue the manner in which a city developed is far more important than the "degree" of development. Phoenix is a young, developed city but the efficacy of public transportation here is very different than London.


Rail isn't something you can solve solely with tech, though. It requires building new infrastructure, which requires enormous political cooperation, which is an intractable problem in our current political climate. Self-driving cars do require some political cooperation, but it's much less because it purportedly doesn't require new infrastructure.


From my experience, relative to the rest of the first world, American highways and streets are dilapidated and neglected. I don’t see how that doesn’t get worse when self-driving cars start using those streets more efficiently than humans.


How can you expect your experience to be meaningful when road maintenance is the responsibility of local governments? I've traveled a lot and I don't think I have a good understanding of the condition of average US roadways much less those in other countries.


Right. Which is why tech companies are the wrong people to solve transportation. When you only own a hammer, everything is a nail.


I don’t oppose public transport at all and the US has a lot to improve in this area. But e.g. East Asia and Europe have plenty of rails, and people still want to drive their own cars, and there are still an unbelievable number of accidents.

We should require that every traffic participant gets a sensor, and connect them all in a (local) network, instead of only relying on machine learning and its vulnerability to the unpredictable situations where accidents occur. Pure autonomous self-driving won’t be the final answer here.

Please correct me if I’m not up to date here, but to my last knowledge self driving car machine learning lacks the most important data, that of dangerous situations. I’m surprised that Uber considered it ready for roll out.


Germany here, we don't actually have "plenty" of rails for public transport, we love our Autobahn (no general speed limit!) and our cars, yet our vehicle fatality rates are among the lowest in the world (half that of the US).

The reason that fatalities are 10x higher in Asia is that people drive (and walk!) like lunatics and vehicle fatalities are just shrugged off.

Just be more orderly is basically what I am saying. Also eat more sauerkraut - it's healthy!


I’m not sure who is downvoting this, but there are 1.74 people killed per billion KM on the Autobahn. This compares with 1.16 in the U.K. and 3.38 in the U.S.

Having driven about 2000KM on the Autobahn as a tourist the lane etiquette on the Autobahn is the best I’ve ever seen, and I’ve driven through the U.S. (both East and West coast) and much of Europe. It’s no surprise to me that the Germans can drive on de-restricted roads and have the one of the lowest number of deaths (HIGHWAYS ONLY) in the world.
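
For anyone comparing these with the per-100-million-mile figures quoted elsewhere in the thread, a small unit-conversion sketch (the 1.74/1.16/3.38 per-billion-km rates are from the comment above and cover highways only):

  KM_PER_MILE = 1.609344

  def per_100m_miles(per_billion_km):
      # convert fatalities per billion km into fatalities per 100 million miles
      return per_billion_km * KM_PER_MILE * 1e8 / 1e9

  for rate in (1.74, 1.16, 3.38):
      print(round(per_100m_miles(rate), 2))   # ~0.28, 0.19, 0.54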


We can’t even standardize on a format across browsers without severe inconsistencies and bugs, yet you expect vehicles of all types and pedestrians to cooperate in a network? That’s not even taking into account the outliers who refuse to conform (or worse, those who try to hack it).


It’s not me alone who expects that, communication between vehicles has been in discussion for long time: https://www.nytimes.com/interactive/2017/11/09/magazine/tech...

Every car has plenty of communication and localization systems today. Nearly every pedestrian carries an electronic device with potential for exact localization. It makes no sense to not use that data, and solely rely on still-not-perfect computer vision. The optimal system will necessarily be a combination of both.

As for standardisation, maybe realizing what an enormous number of fatal and many more forever life-altering accidents we currently consider normal will finally get people off their backsides. I wish somebody would start a media project revealing how insane the current situation is..


Wanting/needing something to work and it actually working is two entirely different things. Again, refer to my point about browsers. Happy to give a lot more examples if that's what is necessary.


>We Should really just build rails

The beauty of cars vs rails is that cars aren't limited to a set-in-stone path defined by a rail. Cars can go in and out of parking lots and dirt roads. Rails would be more automated, yes, but not as versatile.


Rails empty out into areas that are parking lots, and maybe you don't get the irony of rail, but the more of it you have the fewer parking lots you need.

And as far as dirt roads go, if the self driving cars can't make it through well defined, paved roads what are the odds they are going to accurately navigate some dirt road?


Why not rails for the long distance high speed travel and then low speeds off the rails for that final distance?


A lot of ICE train routes run alongside the Autobahn. The beautiful thing about the Autobahn is that it's legal to match the speed of the train. That said, it's bad for the environment, and cruising at 140mph where it's safe to do so drinks fuel like nothing else.


I think in reality you will find that this is rather difficult to do, especially when it's one of the high-speed lines where the ICE can go 300 km/h - quite fun to watch the cars go by really rather quickly.


Wherever there is a road, there can be rail.

Autonomous cars aren't a viable tech until the roads are reengineered with rails built in. It's just way too big of a technical problem to solve with computers alone.

Right now the death rate of autonomous cars is far higher than that of human drivers, and it is never going to catch up given that it hasn't advanced technologically recently.


As someone preparing to lay down a small (1/5 scale) railroad - you vastly underestimate the expense, difficulty, and maintenance requirements of rail. Switches and diamonds (intersections) are especially trouble-prone. 2% is considered a serious grade. And derailments happen, more frequently with higher speeds and deferred maintenance.

If our rail network was as potholed and badly maintained as our roads, people would die in train accidents every day.

Rails are great for intensive, carefully manicured routes. They are not a substitute for roads.


There seem to be fairly regular (at least, more regular than one would like) Amtrak derailments in the US. That isn't to say I think it is Amtrak's doing specifically; I think, like you said, it's a maintenance issue coupled with everybody else who isn't on the train interacting with it.

Then there are the cases of engineers who are distracted and what not, but I'd imagine we're talking autonomous so that (hopefully) wouldn't be an issue.


We're not talking actual rails, right? Just some kind of marker / beacon with information (like a RFID for the roads).


I was thinking more superconducting floating maglev with underground power transfer, but whatever floats your boat (rail...)


I would love public transportation like I've experienced it in Europe, but train tracks have exceptionally tight tolerances and high costs to build/maintain vs. roads. I hope Musk will fix this, but probably not before "good enough" self-driving cars arrive. Also, trains get into accidents pretty regularly. First Google hit:

According to the US Department of Transportation, there are about 5,800 train-car crashes each year in the United States, most of which occur at railroad crossings. These accidents cause 600 deaths and injure about 2,300.


My great-grandfather was killed by a streetcar (on rails) on Market Street in downtown San Francisco.


There was an episode of Darkwing Duck (which shared a couple characters and I presume a lot of writers with DuckTales) that serves as an amazing parable for why vehicles are usually better off adapting to the environment than the other way around. The episode is called "Extinct Possibility", and might (or might not) be watchable at the following URL.

https://m.youtube.com/watch?v=uAqZ_jMmGAk


We detached this subthread from https://news.ycombinator.com/item?id=16621507 since it has become its own separate discussion.


In my area, our rail proposal was 1 billion for 9 miles of rail. Good luck putting everything on rails.

Ridesharing algorithms will 100% solve the problem of underutilised mass transit. Self-driving isn't necessary.

Imagine shuttles that hold 10 people using uber pool algorithms and funded by the city.

People will be able to make their request and a nearby shuttle will pick up all the people in that area. The shuttles could do ad hoc transfers with other shuttles so people wouldn't even have to be going to the same area.

The algorithm would minimize the average ride time for everyone.

The #1 issue with mass transit is lack of convenience. Rideshare algorithms will eliminate that lack of convenience.


Ride sharing works for young professionals who aren't moving around with kids. I grew up in a public transit city. I then got a car in the US. The car is a strictly better experience, and only marginally more expensive. Ride sharing can reduce the problem, but "100% solve the problem" is off by about 95 percentage points.


> We should just really build rails.

How will that help? The reason that the trains in London and Singapore don't kill people is not that they are on rails, it's because they're underground.


Jawohl! Our Straßenbahnen (streetcars) kill people all the time, because they take part in regular traffic. If you walk in front of one, it can't steer away!

You need to put the trains below ground, or well above!


It would be easier to put a small transmitter, running on solar power, in the middle of each lane. This would keep the cars centered in the lane except when they are executing some action.


Due to geofencing, we are building a sort of rails. We're creating cars that are only useful in a limited, well-mapped area.


Most people don't live in a dense enough city to justify rails.


You're right. There's a lot of research and experience that suggests a well-developed rail network would have all sorts of benefits. We could take it a few big steps further and point to other problems like suburban sprawl in America, dying small towns, etc.

But there are a couple of really big, really massive problems. We have trillions in existing infrastructure and trillions more in existing property that's been developed over the course of decades. A lot of that development has come with unintended consequences. Just look at Houston after Hurricane Harvey and how their development and zoning practices took the insane rains Harvey deposited and magnified their negative consequences.[0] Decades of short-sighted decision-making took a hurricane and made it worse. Some changes will likely be made, but they won't solve nor likely keep up with a problem that's only going to get worse. That'll happen because the alternative, demolishing huge portions of the sprawling concrete city, isn't economically viable. The city can't be moved.

It's baggage. And there's so much more. Tens of trillions of dollars worth. Then there are existing roads, bridges, and other infrastructure connecting all of it. Designed for cars, and much of it legacy infrastructure that doesn't take into account modern infrastructure design principles. Add cities and towns, some of which are (charitably) in less-than-optimal positions. Towns, strip malls, and other structures that wouldn't exist if it weren't for suburban sprawl. Baggage. And then there's the people. People with ideas on what their lives ought to be like and where they want to live. There's a town, Centralia, Pennsylvania, that is literally on top of a massive underground fire that's been burning for over half a century. There was a massive relocation effort with federal and state funds. The town has no services, its zip code was discontinued in 2002, and it's still burning with all of the potential health risks that involves. There are still five residents there today, who have fought every attempt to relocate them. They will continue to live there until they die, at which point their homes will finally be taken and demolished through eminent domain.[1] If we can't bulldoze a town like that, we sure as hell can't take a bulldozer to normal ones.

Baggage. What it means is that we can't simply replace everything. It can only be done piecemeal, which means new infrastructure must take into account the need to accommodate everything that was developed under the old. High speed rail can be built out in the United States, and I genuinely hope that it is. But it literally can't replace the car in its entirety.

Self driving car research is--compared to the cost of major infrastructure replacement, let alone an alternative that simply isn't feasible--practically a rounding error. It doesn't hurt us, because the option was never a binary choice: self-driving cars OR rail. Self-driving cars give us the ability to deal within existing legacy constraints as they're slowly adapted, modified, and/or replaced. They also offer some long-term possibilities that will allow us to make radical changes in urban design. And yes, those changes are ones that can accommodate rail, pedestrian-friendly city design, and other options that can drastically improve quality of life.

0. https://www.npr.org/2017/11/09/563016223/exploring-why-hurri...

1. https://finance.yahoo.com/news/pa-residents-living-above-min...


It didn't happen overnight. The US had more passenger rail in the 1940s than Europe has now! Northern Indiana has the same population density as Scotland, so "too spread out" isn't the issue.

Cities only spread out over the past century. You start building rail again and it will get drawn back in. But the State has to fund that mode of transportation first.


Calling these pre-programmed multi-MJ wheeled computers "self driving" is foolish and deceptive.


It's likely this was actually a bicyclist, not a pedestrian.

The police are saying pedestrian walking outside the crosswalk but here's a picture of the crumpled bicycle: https://twitter.com/daiwaka/status/975767445859287042

You can see it's next to a sign. That sign lets us determine exactly where it happened on Google street view: https://www.google.com/maps/@33.4370667,-111.9430321,3a,75y,...

It's exactly where the vehicle lane crosses the bike lane approaching the intersection. Most likely the Uber and bike collided when the Uber was crossing into the right turn lane.


The news video in the original post also says bicyclist. Speed limit goes up to 45 mph just before the bridge.


Isn't it reaching to see a picture of a bicycle near a sign and then conclude that the accident must have happened exactly next to or near the sign? Maybe the police/witnesses dragged the bicycle to that location?


Nah. The Uber likely stopped in its tracks and it's in the picture too. You can see an official collecting evidence at the scene. The bicycle may have been pulled onto the sidewalk but it would be the sidewalk near the accident, not far away.


There's no indication that this is the fault of the car or its human driver. Unfortunately, the media won't notice that. She stepped out in front of the car and since she was struck immediately, it sounds like the car had no time to stop or swerve, let alone time for the human driver to react in any way.

EDITed to remove offensive word.


> rabble

Please follow the site guidelines when commenting here. Adding flamebait into a flammable topic sparks explosions, making this place worse for everyone.

https://news.ycombinator.com/newsguidelines.html


Why am I not surprised Uber is the first company to kill someone. With their top notch ethics, ah. This actually makes me really angry, more than I had anticipated.


Have to quote Prof. Teddy here.

Context: How new technology is introduced on public streets and we have no choice or say on whether we want it or not, even if it endangers us and gets us killed.

> When motor vehicles were introduced they appeared to increase man’s freedom. They took no freedom away from the walking man, no one had to have an automobile if he didn’t want one, and anyone who did choose to buy an automobile could travel much faster and farther than a walking man. But the introduction of motorized transport soon changed society in such a way as to restrict greatly man’s freedom of locomotion. When automobiles became numerous, it became necessary to regulate their use extensively. In a car, especially in densely populated areas, one cannot just go where one likes at one’s own pace; one’s movement is governed by the flow of traffic and by various traffic laws.

and

> Even the walker’s freedom is now greatly restricted. In the city he continually has to stop to wait for traffic lights that are designed mainly to serve auto traffic. In the country, motor traffic makes it dangerous and unpleasant to walk along the highway. (Note this important point that we have just illustrated with the case of motorized transport: When a new item of technology is introduced as an option that an individual can accept or not as he chooses, it does not necessarily REMAIN optional. In many cases the new technology changes society in such a way that people eventually find themselves FORCED to use it.)

Source: The Manifesto: INDUSTRIAL SOCIETY AND ITS FUTURE http://www.washingtonpost.com/wp-srv/national/longterm/unabo...


> When a new item of technology is introduced as an option that an individual can accept or not as he chooses, it does not necessarily REMAIN optional.

This is offtopic, but I am reminded of how Slack initially supported IRC gateways and recently discontinued them.

It’s not a perfect likeness to this quote’s point. But I think it’s interesting how this pattern of initially optional technology becoming de facto standard emerges ubiquitously.

I suppose it’s arguable that embrace, extend and extinguish is the purposeful, deliberate manifestation of this pattern, arbitrated by a single entity (i.e. a company with a new technology).

Edit: Jeez an immediate downvote? The comment was barely posted a minute ago :)


This is a great point. Note, though, this is not new.

It's been happening since the invention of the first tool. Stone weaponry, agriculture, and so forth, all became largely non-optional after their introduction.


The major difference, though, is that most major technologies are not also accompanied by legislation that effectively penalizes the old methods. There's nothing legally stopping me from using a bow and arrow, or buying hand-woven clothes, etc., whereas walking is effectively criminal in some parts of the US: https://www.cnn.com/2014/07/31/living/florida-mom-arrested-s...


>bow and arrow

Check again. Bows and arrows are often considered weapons at the same level as firearms, so many of the same laws apply.


Sure, but they're considered weapons and subject to weapons laws. Firearms are not inherently privileged over a bow and arrow, the way that cars are legally privileged over pedestrians in many cases.


Obligatory Mitchell and Webb skit: https://www.youtube.com/watch?v=-EGAtLGDU7M


It's so refreshing to actually read a truly free thinking and intelligent discussion on the problems of technology and its impacts on society.

Anyway, I just finished reading Kaczynski's first book, "Technological Slavery" (Feral House, 2010). It's an amazing work and it elaborates on this point quite a bit. Recommended.


An “amazing work,” just to be very very clear, by Ted Kaczynski, the serial bomber. Not that he has nothing of value to say, but keep the source well in mind. He’s a smart guy, with a lot of insight, and he’s also batshit crazy and a remorseless killer.


Yes, he is a remorseless killer. No he's not "batshit crazy".

You might also want to check out his most recent work, "Anti-Tech Revolution: Why and How" (2016)

Here's what MIT's student newspaper had to say about the book:

"Anti-Tech Revolution: Why and How is Kaczynski’s well-reasoned, cohesive composition about how revolutionary groups should approach our mercurial future….. I recommend that you read this compelling perspective on how we can frame our struggles in a technological society." -- The Tech, MIT's oldest and largest newspaper

"batshit crazy" haha. A political classification if there ever was one.


Just finished the new Unabomber series on Netflix. This is similar to his manifesto which I ended up sympathising with towards the end.


It's actually a direct quote from his manifesto


Great points and extremely important.

But at the same time, I don't personally want to go back to an age before automobiles became commonplace. They're too useful to me.

Is that selfish? Sure. But our lives are all shaped by our wants, not by our needs.


>But at the same time, I don't personally want to go back to an age before automobiles became commonplace.

Neither would I, but I think the cautionary value in the discussion of the resulting inadvertent restriction of pedestrian freedom is not to visualize an auto-free world as an alternative but instead to visualize a new set of customs of urban development that aggressively protect pedestrian freedom. In the real estate world, the walkability movement is concerned with this and should achieve greater influence.


I don't think that anyone is asking for a return to nature, but America has shown that adopting the automobile as the primary mode of transportation and adapting all of society wholesale for it has serious side effects. Rather than promote it as (essentially) the only option, automobiles should be a tool in a toolkit of transportation methods.


I didn't see how terrible our system is until I moved to an area that is friendly to walkers, well friendlier than the giant pickup truck havens of the Midwest where I lived most of my life.

Someone above mentioned how this type of dependency problem has happened since the first tool -- I don't think so. The problems I deal with daily are drivers who are staring at their smartphone as they turn, or people simply trying to beat the pedestrian across the crosswalk. I've had moments where my life flashed before my eyes more times than I can count, and every single time the driver waves their hand at me like it's my fault. I'd like for them to get out of their mobile uterus so we can discuss this like adults, but instead I'm left yelling at their big metal machine, and they're yelling at the interior. And this is in a "walker friendly" area.


I’m pretty sure if Arizona’s lawmakers understood the underlying nature of machine learning and autonomous driving, they wouldn’t allow self driving cars to operate.

I don’t care how good your models and data are, until you’re able to write an algorithm that can fully handle and learn from a situation it was completely unprepared for, it will always encounter edge cases like this.

This is what happens when we have a society of people saying AI is already here, people like Elon Musk saying AI is a “significant threat” despite "AI" being essentially a black box statistical model in its current form. From what I’ve seen, the general public thinks we’re 20+ years further along in AI than we really are.

You need general AI before AI can drive a car, full stop. Otherwise, you need to isolate roads from human drivers, pedestrians, and cyclists. It just won’t be reasonably safe until then.


>You need general AI before AI can drive a car

About 1.3 million people die worldwide in car crashes each year (roughly 37,000 of them in the USA in 2017), with an additional 20-50 million injured. You don't need to be perfect, you just need to be better than the baseline.

That said, and to be fair to your feelings, Uber certainly is not better. This is a fact I'm sure lawmakers in Arizona are discussing right now.


The video mentions she was on a bike and shows a mangled bicycle. Also the line markings on the street indicate there is a bike lane to the left of a right turn lane. Definitely very bad.

https://www.google.com/maps/@33.4369934,-111.9429875,3a,75y,...


Not to be insensitive to the death. But, that bike lane design is just horrible. Are human drivers really known for respecting the "do not cross solid white line" rules? 9/10 drivers that realize at the last moment they need to turn right are going to quickly cross over this bike lane probably without much time to check their blind spot. If I was cycling on this path, I'd want to use the sidewalk and be extra cautious of peds, I don't trust drivers human or not.


Yes, it's been proven that un-protected bike lanes such as those are very dangerous. There are more and more separators coming out (https://peopleforbikes.org/blog/a-new-generation-of-bike-lan...) to allow for protected bike lanes.

However, I do need to mention that when you choose to bike on the sidewalk, you're creating the same type of problem that cars that ignore traffic rules create for bikes. There's a reason why biking on the sidewalk is illegal in most places, and why bike lanes/roads exist.


Yes, but the reason why biking on the sidewalk is illegal in most places is mostly unrelated to safety.

The only bicycle riders willing to get organized and lobby the legislators are the very most serious ones. These are the people who want to race along at 50 MPH.

The resulting laws force pitifully slow riders to be out in the street. Even if you can only manage 5 MPH, you have to be in the street.

Lots of us are too arthritic, too fat, too weak, too inflexible, too limited by lung capacity, too limited by heart capacity, and too limited by cheap steel mountain bikes from Walmart.

So our choices are:

a. break the law

b. annoy drivers and risk death

c. don't ride a bicycle


>you're creating the same type of problem that cars that ignore traffic rules create for bikes

Which are what? Such a broad statement doesn't seem accurate, and the sidewalk shown in the map looks low-traffic with excellent sight lines; as long as you're not cycling head down at high speed, the danger to pedestrians doesn't seem comparable to the danger posed by car drivers ignoring traffic rules.


The problem is that something is somewhere and/or does something unexpected. The bicyclist doesn't expect the car to drive in the bike lane. The car driver turning across the sidewalk doesn't expect the bicyclist to be riding across the driveway on a bicycle. If you're not expecting it you're not going to look for it.

Riding a bicycle on the sidewalk isn't illegal most places because it's dangerous for the pedestrians, it's illegal because it's dangerous for the bicyclist.


I understand and personally am very cautious around driveways especially if they don't have perfectly clear sight lines.

I just find it odd when cyclists decry riding on the sidewalk on principle when the danger of riding with traffic is well known and America has an abundance of long, empty sidewalks in suburbia and around office parks.


Riding on the sidewalk is dangerous despite the sidewalks being usually empty. There's another factor: when sidewalks become intersections. Very few drivers check for cyclists on the sidewalk before going through an intersection. From what I understand this is shown quite clearly in the crash statistics.

Many dedicated cycletracks have similar problems. Here in Austin there are several that I refuse to use because drivers far too frequently turn into my path without looking. I ride in the lane so that I can be seen. It's counterintuitive to most drivers, but visibility is often the deciding factor in bike safety. This would be more obvious to drivers if they spent more time cycling. If a cyclist is stuck using a path with poor visibility, I find paying attention to the cars and using light touches of an air horn when approaching an intersection to help. But I still prefer being visible to that.

More generally, intersections are relatively more dangerous (crashes per person mile or something like that) than straight segments of road for any mode of transportation using the road.


It's not a matter of principle, it's a matter of statistics. Even empty sidewalks are more dangerous to ride on than riding with traffic.


The problem is that pedestrians on a sidewalk do not follow traffic rules. They don't signal turns or stopping in advance. They choose random sides of the sidewalk to walk on. They don't have mirrors to see approaching cyclists behind them.

Vehicles, on the other hand, do follow traffic rules with regards to lane usage, signaling, and checking what's behind them. Even if they don't, they're still more predictable compared to pedestrians.


> you're creating the same type of problem that cars that ignore traffic rules create for bikes

Agree. I'd do it anyways because sidewalks are 99.999% empty where I live. I could ride for 20 miles and not see a person on the sidewalk. So biking on sidewalks is safe as long as you pay attention to what's ahead. No idea if that's the case in this AZ location.


You still have to cross at intersections and driveways. I've been hit multiple times (at low speed) by motor vehicles crossing the sidewalk at a driveway. I don't ride on sidewalks anymore.


This sounds like a speed issue. If you’re doing 20 mph on the sidewalk then yes you weren’t in view when the driver started backing out of their driveway.

I don’t assume that just because I’m on a bike I get the right of way. If I see a car reversing ahead, I move into the street, prepare to come to a full stop, and proceed with caution, watching the car’s acceleration to see if they’ve seen me. If I get the hint they haven’t, I stop. It’s a scratch on the bumper for them and my life for me; I am more than glad to yield.


Cyclists here (in Seattle) have a healthy habit of calling out "to your left" when approaching from behind. Things like this need to become more widespread.


I do that regularly on my commute on a mixed-use trail. Half the time the pedestrian looks to their left and their feet follow. The other half have headphones on so they don't hear me anyway.

I might be exaggerating a bit.


Personally, as a cyclist, I don't intend to give up cycling on roads just yet (they are for multiple kinds of transport users). After getting munged on a separator, I also don't think they are the be-all and end-all.


There aren't a lot of options when you need to have cars turning on roads with bike lanes. At some point the car lane and the bike lane have to cross paths unless you're going to build a fully grade-separated bike lane.

I suppose you could have a wall that separates the bike lane from the road except for a short area where cars can cross over, so that at least there's only a small space where the bikes & cars can interact, but that introduces a bunch of new problems (cars that can't see bikes behind the wall when they go to cross over, cars that run into the wall, probably even more swerving to get into the lane because your choices are do it RIGHT NOW or you're stuck/hit a wall).


> There aren't a lot of options when you need to have cars turning on roads with bike lanes. At some point the car lane and the bike lane have to cross paths unless you're going to build a fully grade-separated bike lane.

The problem is that the lane markings encourage vehicles to make turns from the non-rightmost or non-leftmost lane. One should not be crossing another lane when making a turn at an intersection. One should merge into the rightmost or leftmost lane in order to make the corresponding turn.


I think it'd be safer to keep the bikes in the far right even if they go straight. Yes, it means they have to pay attention to cars turning right. They might even need to stop until it's safe. This is a place where "bikes/peds having right of way" doesn't really make sense.


Users to the right have the right of way over users further towards the center of the street. Thus, pedestrians crossing in the crosswalk have priority over bicycles and cars turning right, and bicycles going straight have priority over cars turning right.


I guess in a perfect world having right of way means you can cross the street without looking left or right. I’m more willing to yield as a bike/ped because my risk is higher and I know people are distracted idiots.


Absolutely worth accepting the reality of drivers on phones as a threat to safety. I'm just trying to explain the rules of the road from which intersection design can be derived.

There are ways to design an intersection so that those rules are followed naturally, and there are ways to miss it. C.f. right turn lanes.


This is the way it works in Copenhagen.


You can do a lot better than the normal US way. This is how the Dutch do it: https://www.youtube.com/watch?v=FlApbxLz6pA


The state of the art is to make all intersections between car and bicycle lanes at 90 degree angles. For an example, see https://bicycledutch.wordpress.com/2018/02/20/a-common-urban....


I would greatly appreciate a barrier of some kind between me and car drivers looking at their cell phones.

Or just do what Germany has sensibly done and split sidewalks between pedestrians and cyclists (though, cycling enthusiasts are unlikely to enjoy the experience as much)


They have barriers in some places in the US but they're not very common: https://imgur.com/a/gy8mq


FYI, that is actually the Burrard St bridge in Vancouver, Canada: https://www.google.com/maps/@49.2761098,-123.1351267,3a,75y,...


I'm much more a fan of the design that places a buffer (either sidewalk or shrubs) between the CARS and any other type of vehicles. Sidewalks are tolerable as they are often raised above street grade and a vehicle usually has to be actively controlled to overtake such a raised divider.


Encouraging bikes and other personal mobility devices to use the sidewalk pushes the danger onto pedestrians. I'd argue there really shouldn't be right turn lanes except in really select circumstances. Cities that have a lot of bikes, like Copenhagen, don't really have that many turn lanes. And when they do, the mixing area from protected lane to unprotected bike lane is pretty clear.


Anecdatally, I agree. We live at the edge of a residential area where there is a long stretch of road with bike lanes and turns into the residential area every block. Not only do people regularly drive in the bike lanes, the people who do so are typically the most aggressive and inattentive drivers who are trying to angrily speed around the lines of cars who are driving the speed limit and respecting the stop signs. I'm not aware of any recent accidents, but only because cyclists seem to just completely avoid the entire stretch of road.


Those sorts of bike lanes teach cyclists that they should undertake slower-moving traffic - the problem is that the majority of cyclist deaths in London are due to vehicles turning into undertaking cyclists.


If it's a bike lane, it is not undertaking. It is a lane. You wouldn't blindly turn across another lane of traffic, why do you do it when it's a bike lane?

And the majority of deaths in London is due to turning transporters/trucks. They are purposely built such that the operator sits very high above and can't even see a bike or pedestrian unless it's many meters away, only by looking into one of 10 mirrors (but of course they don't). On a work site nobody moves these things unless there is an outside instructor, but on public streets we've decided to just blame whoever died. Or, if that fails, blame the infrastructure, even though we oppose any other kind of infrastructure.


The problem is that those sorts of cycle lanes condition cyclists to think it's always right to undertake in any circumstances - it's got to the point now in London that cyclists feel entitled to undertake at speed past stopped traffic.


That's logically impossible. Under/overtaking doesn't mean moving faster than neighboring traffic, it means "merging into slower moving traffic". If the cyclist isn't moving into the car lane, they aren't undertaking.


Not in UK terms (as I recall from reading the Highway Code): it's overtaking slower-moving traffic that's in the lane to your right.


I don't think that's insensitive to the death. It's properly placing (partial) blame on the cause of the accident.

A better design for that intersection would be to keep the bike lane against the sidewalk and the turn lane inside of that, so that they don't cross, and have a "no turn on red" sign, so that the paths of bicyclists and people turning in cars never cross as long as everyone obeys the stoplight and sign.


The cause of the accident was a careless driver (or algorithm) not checking before crossing a lane.

I can do it, why can't they?


Do you have a statistically large enough sample to conclude that they're worse at driving than you are? Most cases of driver inattention don't escalate to a collision, so trying to infer inattention rates from collision rates is pretty noisy.

And even assuming this was simply a case of a bad driver, what policy approach would you suggest to protect the general public against such drivers? Accepting that drivers are fallible and designing our road systems to be robust against that seems a more effective approach than berating those drivers who are particularly unlucky in the consequences of their failures of attentiveness.


From the perspective of policies such as designing an intersection, placing blame (even if that blame is justified) is utterly pointless. Systems should be designed for the users you have, not for the users you want, especially since in this case it's literally a matter of life and death.

Sure, drivers should check before crossing a lane, but some percentage don't, so it makes sense to minimize lane crosses.


That's actually more dangerous as bikes tend to get cars turning into them when they turn right.

(It's hard to see a bike at speed going straight when a car is turning right.)

Here's an image that illustrates the danger: http://www.sfbike.org/wp-content/uploads/2013/02/Right-Turn....


> (It's hard to see a bike at speed going straight when a car is turning right.)

This is addressed by having a "no turn on red" sign as I suggested: you don't have to see the bicyclist at all because the bicyclist shouldn't be in the intersection at all when you're turning with a green turn arrow.


Why is a bike going at speed when the light is red? Shouldn't the bike also stop at the red light?


The bike isn't going at speed when the light is red (at least, if they're following the law). This is when the green turn arrow should be lit: when bikes aren't moving.


That kind of design only makes right hook collisions more likely. Pushing cyclists closer to the curb, to the periphery of a driver's view, makes them less visible and more likely to be unseen before a turn is made.


...which isn't relevant because bikes shouldn't be in the intersection at all while the turn is being made. The bicyclist going straight has a red light when the turn lane has a green arrow, the turn lane has a red light when the bicyclist going straight has a green.

Perhaps I should have said "wait for turn indicator" instead of "no turn on red", that's a bit clearer.


This is what I see more in Portland and it seems to work better. https://www.google.com/maps/@45.4981473,-122.6395978,3a,75y,...


AZ driving rules language on solid white lines is the same as FL (where I live). They use the words "should not cross", so it's not technically an infraction for a driver to cross a solid white line. I was in an accident once due to someone crossing a solid white while at a light and cutting me off so I'm familiar with the RAW on those lines.


I have been hit by a car in a bike lane by a car making a rogue left turn (In the UK). I'm always wary of riding in bike lanes like this now.


That looks completely identical to all bike lanes I've seen in suburban areas. I think bike lanes are just inherently dangerous. Would be interesting to see what a well-designed suburban bike lane looks like.


Unfortunately, this is a design that is pretty common in many of Tempe's major intersections. I pass through a similiar intersection biking home from ASU and have had multiple close calls with human drivers.

Anecdata: The times I have interacted with one of Uber's autonomous vehicles at this intersection, they tend to hit the brakes pretty hard as they are about to enter the turn lane and I am around 10ft behind them. All of my interactions with them, however, have been during the day, and it was unclear if it was the human driver in control.


That bike lane is just a shoulder with bicycle symbols painted in it and it looks terrifying.


The car turn-lane is to the right of it, so at least it's maintained as part of the road. Bike lanes that are truly a shoulder with bike symbols painted on are often unmaintained, and full of pot-holes, debris, and such.


This is both terrifying and normal. The bike lanes in my area are frequently used as turn lanes... by vehicles that don't fit in them - despite being clearly marked as bike lanes.

Human drivers are terrible.


Where I am in California, some bike lanes are required to also be used as right turn lanes by motor vehicles.


In fact, it's all bike lanes in California, unless they are marked otherwise. Drivers are required to move into the rightmost lane in the last 200 feet before turning right, including merging into a bike lane.

There are exceptions, but only if there is signage or road markings. For example, if there are two or more lanes marked for right turns, then obviously you can turn from any such lane.

(A little-known fact about that: most California drivers are aware that you're allowed to turn right on a red light after making a full stop, unless there's a "no right turn on red" sign - but few seem to know that this rule applies to all the lanes marked for right turns, not just the rightmost lane.)

Most bike lane markings change from a solid white line to a dotted line about 200 feet from a corner, to give drivers a hint to merge into the bike lane before turning. But even if the line doesn't turn dotted, drivers are still required to merge into the bike lane unless there is a specific indication otherwise.

However, the majority of drivers seem to be unaware of this rule and turn right from the auto traffic lane, creating the risk of a "right hook" collision, which the law is intended to avoid. I've actually had other drivers honk at me when they were waiting in a line of cars to make an illegal right turn from the auto lane while I made a legal and proper turn from the bike lane.

The San Francisco Bicycle Coalition has an excellent info page about this:

http://www.sfbike.org/news/bike-lanes-and-right-turns/

Laws on this do vary in other states. But at least in California, if you see a driver merging into the bike lane before turning right, it's not because they are a terrible driver, it's because they are following the law and (hopefully) increasing safety for bicyclists.


I realize now that the article I had read was for Oregon, and apparently this is very different depending on the state.

What's more concerning is that my state (Texas) seems to have a lot of vagueness around the laws.

Well, that's confusing to say the least.


That actually makes sense since it encourages all vehicles to use the right most lane to turn right, rather than crossing in front of a straight through lane when making a right turn.


That's everywhere. Without a bridge or tunnel, it's topologically impossible to turn without crossing the bike lane.


I think you've missed the point. I'm talking about merging into the bike lane prior to the turn.

If you still don't see the difference, here's one way you can tell: if you are stopped, waiting at the intersection, and you've merged, then you'll be occupying the bike lane while you're waiting.


Where I live the law explicitly requires cars to take the bike lane before turning in this situation. It's probably better to separate the blind spot for bicycles check from the look for pedestrians and turn check.


I've actually never seen a bike lane that looks different than that...


Google image search "protected bike lanes".


I mean, I've seen the pictures... just not in the wild. One of those things that seems like a nice idea, but good luck ever getting anyone to pay for them en masse. Have you seen any of these around Tampa Bay? I've never seen anything except the unprotected margins, excepting trails, which are not sharing infrastructure with roads.


I have not, but supposedly Tampa is the pilot for FDOT's first protected bike lanes in the state (a 4-foot island of concrete protecting the bike lane).

Please support the cause as a local citizen!

http://amp.abcactionnews.com/2064993596/tampa-will-be-home-t...

http://www.bikewalktampabay.org/featured/new-protected-cycle...


When someone actually takes the little money to build one, there is an army of people essentially saying they'd rather have 5 more parking spots than safe infrastructure. Then when somebody is hit, like here, the same people argue you can't blame the driver because look how bad the infrastructure is. You can see it in this very thread with people arguing bikes should fuck off to the sidewalk.


Come to New York City, we've got lots of protected bike lanes. More info, and lots of photos, here: http://www.nyc.gov/html/dot/downloads/pdf/nyc-protected-bike...


Yeah, they do similar stupid "bike lane" crap here in Indiana as well.

Yeah, like a line is going to protect me from run-amok car drivers who are too busy on their cell phones. Or stop the bus drivers from using bike lanes as pickup/dropoff areas, cutting bicyclists off.

Long story short, I bike on the sidewalks. I can be a responsible biker on the sidewalk while yielding to pedestrians. My life is more important than biking on a vehicle road.


How about a "sharrow", where cyclists drive right-of-center, in hopes of avoiding the "parked car door lane", and encouraging drivers to cross the double-yellow to overtake.

https://en.wikipedia.org/wiki/Shared_lane_marking


The video says bicyclist and shows a bicycle, while the written article says pedestrian and said the car hit "a woman walking"... I'm confused as to why there is a contradiction, but considering the video shot of the bicycle I assume you're correct


This design seems pretty common in the US. Some bike lanes on major streets in LA are like this, except there they also have to contend with buses pulling into stops. Biking on those streets would be terrifying, but I saw people doing it.


Yes, I bike almost daily on Venice Blvd in LA, and every intersection I encounter is setup like the one discussed here, along with buses weaving in and out. A vicious battle flared up last year when the city replaced a lane of traffic on Venice with protected bike lanes (some discussion here https://la.curbed.com/2017/10/25/16528864/road-diets-los-ang...). There seems to be a marked increase in the hostility between cyclists and drivers now.


This is purely an emotional initial reaction to hearing this without knowing most facts, but I just have to vent and for once be part of the outrage culture and make emotionally charged statements before more info comes out.

Uber is ruining self-driving cars, which could save 20k lives each year, improve our economy, get millions out of poverty, and improve everyone's quality of life, by being short-sighted, financially driven, and bringing the worst aspects of Silicon Valley to the extreme.

I am so sorry to everyone at Google and Waymo who have tried so hard to not let anything like this happen by being extremely cautious and safe. Ughhhh I feel so frustrated, this is going to cause so much backlash.

Travis, Anthony, and whoever else pushed for this to go ahead without caring about safety are evil. This action doesn't correlate with evil as directly as murdering someone does. But these highly intelligent individuals ignored their conscience and the easy deduction that compromising safety, laws, and decency for quick profit is immoral. Whether they admit it or not, they KNEW that this was a high possibility and refused to take precautions, creating this awful culture. 20k lives a year, and all the benefits that will be delayed for years, are on their heads.

They won't face any consequences, just convince themselves of absolution of guilt and do nothing to remedy the situation, but I have no respect for them.


> 20k lives each year

Road deaths are ~40k each year in the US alone.

I agree about Uber; smells like shit, looks like shit, tastes like shit, you better believe it's shit to the core.


If human driven cars kill 1000 people a year, and autonomous cars kill 100, will it be politically unacceptable to switch to autonomous? I suspect it will be.


A portion of that depends on the funding on either side of the issue.

If the autonomous cars kill 1100 per year, but autonomous car companies fund a bunch of studies saying the benefits outweigh the drawbacks, it'll be politically acceptable.


People think emotionally, not rationally. People have all sorts of greatly distorted views on the risks of various activities.


What I don't get is that if this is a test car, how come the driver did not take over the helm immediately? The driver in that car should be on full alert, as if driving the vehicle himself, and jump in as needed. "Test" should not mean letting loose an autonomous car.


Do you think you could maintain the required state of alertness, focus, and engagement necessary to hit the brakes and turn the wheel after 30 minutes (or whatever) of just sitting passively as the car drove you around?

I don't think the "safety drivers" do much other than provide liability-shifting for the company.


The drivers are employed by the company, doing company work. They may (misleadingly) shift the optics away from the automation, but they don't shift the liability.


Sure they do, when Uber (metaphorically) throws the driver under the bus for failing in their duties as "safety driver". >:-)


Unlike all the other comments discussing charges, this one actually has a good point. Presumably the safety driver is there specifically to prevent this from happening. They'd still have to show that the driver was personally at fault. That would mean showing that the driver was e.g. sleeping/not paying enough attention, or something similar.

At the very least, it might call into question whether safety drivers are actually providing any real safety. Localities may start to crack down on using that kind of excuse for self-driving tests.


A test pseudo-"driver" ought to be doing stuff like orally making notes of the environment and car operation, collecting data the sensors might be missing, and monitoring the car's automated judgement. As a related effect, the human should be ready to emergency brake or swerve.


It would be impossible to stay alert as a safety driver.


Could you please not use allcaps for emphasis in HN comments?

This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.


Updated. Thank you for pointing me towards that, as I was unaware.


Ok great. I've detached this subthread from https://news.ycombinator.com/item?id=16620865 and marked it off-topic.


Could there be some way of emitting a signal on your person so cars never get confused? Seems like a pain in the ass, but maybe it would be worth it? Cars could know for sure there are people wherever the signal is emitted from. Dunno if it's feasible; curious.
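
To make the idea concrete, here is a minimal sketch of what such a beacon might look like, purely as an illustration. Everything in it (the UDP broadcast, the port number, the JSON field names) is an assumption invented for this sketch; a real vehicle-to-pedestrian system would use a standardized radio stack with authentication and far tighter latency, not ad-hoc Wi-Fi broadcasts.

    # Hypothetical pedestrian beacon sketch (all details invented for illustration).
    # Broadcasts "a person is here" messages that nearby vehicles could listen for.
    import json
    import socket
    import time
    import uuid

    BEACON_PORT = 47808        # arbitrary port chosen for this sketch
    BEACON_INTERVAL_S = 0.5    # how often we announce ourselves

    def read_position():
        """Stand-in for a GPS fix; returns (lat, lon)."""
        return 33.4370, -111.9430  # hard-coded for the sketch

    def run_beacon():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        beacon_id = str(uuid.uuid4())  # stable identifier for this device
        while True:
            lat, lon = read_position()
            msg = json.dumps({
                "type": "pedestrian",
                "id": beacon_id,
                "lat": lat,
                "lon": lon,
                "ts": time.time(),
            }).encode()
            sock.sendto(msg, ("255.255.255.255", BEACON_PORT))
            time.sleep(BEACON_INTERVAL_S)

    if __name__ == "__main__":
        run_beacon()

The obvious problems are the ones you'd expect: everyone would have to carry one, cars would still have to handle people without beacons, and the signal would need to be authenticated so it can't be spoofed or abused for tracking.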


[unconstructive but still..] So the day I could be imprisoned because of a coding error... like a segfault on a highway... has finally come :D Does the disclaimer "no warranty of any kind" shield us from that kind of thing? ;) (I code on some mainstream C libraries)


If you don't offer warranties, your user will have to review your library so they can offer warranties.

Someone has to sign that stuff as "production worthy or I go to prison".


You already can? Our lives depend on all kinds of software every day. In Canada, at least, you'd need a P.Eng to work on stuff that directly endangers lives.

https://en.wikipedia.org/wiki/Therac-25


If I'm not mistaken, P.Engs are merely needed for the rubber stamping, not to do the grunt work. I doubt they'll go through all the specs, re-run all the simulations, and verify all the parameters.

Also, how would your P.Eng be of any use in justifying the learned coefficients of your AI, or in protecting against a cosmic-ray bit-flip in the hardware your provably correct software is running on?


Engineering for safety is about testing, not manufacturing quality. Of course you need to build quality in order to pass tests, but perfection is not a goal of any component. Tests measure quality, and failsafes and control layers reduce failure rates.


Thanks for the Therac-25 reference :)


Funny how this story has been pushed off the front page of HN prematurely.


I bet people at Volvo are feeling a bit anxious at the moment for that deal:

https://www.media.volvocars.com/global/en-us/media/pressrele...


Uber: 1 Pedestrians: 0 Game on.


Please don't do this here.


On purpose, to protect something "more valuable," or by accident?


There's an old saying around gun owners: "There's no such thing as a firearms accident. Only negligence."

I think the same thing applies to autonomous things.


In NYC the urbanists have a trend of saying "crash" instead of "accident" when referring to traffic collisions. They feel that "accident" gives the driver too much credit in situations where they were completely at fault for not looking, going too fast, running a light, etc.


> In NYC the urbanists have a trend of saying "crash" instead of "accident" when referring to traffic collisions.

Everyone I've known in law enforcement, especially CHP officers, does this by default (though they tend to prefer “collision”) when referring to an event where a car hit something. As one of them explained, “collision” or “crash” is a readily discernable physical fact; “accident” is a conclusion about the mental state of the participants.


Which, by the way, is idiotic. Last century, people switched from "crash" to "accident" for exactly the same reason -- to shift liability from the machine to the operator. It's a nonsensical euphemism treadmill.

The idea that an "accident" isn't the drivers false is an absurd premise. Drivers are obligated to be diligent. "Crash" is a neutral term that doesn't assign liability to anyone.


I feel that this sentiment could be applied to almost anything and isn't really meaningful. Almost every accident ever could be attributed to negligence somewhere.


I think the point here is a firearm in particular is designed to project deadly force and should be treated as such. If someone "accidentally" shoots someone, it means they were negligent in their treatment of the instrument in their hand.

Autonomous vehicles are expected to be able to safely navigate a large variety of streets and highways, and possibly trails and countryside in the future, without killing anybody else around or killing the people inside, nor destroying the fixtures and buildings that make up your typical city street. The only problem is cars are inherently deadly, go look up the number of people that have died in automobile "accidents" since the automobile was invented, total it up for just one country. They need to be treated with and afforded the same respect as any other instrument intended to project deadly force, because they do, as do whatever algorithms are controlling the instrument.


I try pretty hard to refer to "vehicle collisions" and things like that instead of "car accidents".

So yeah.


Agreed; more generally, there's a movement to call these incidents "crashes" instead of "accidents", since it's arguable that crashes are avoidable: https://www.crashnotaccident.com/


Accidents are avoidable. Legislating euphemisms is feelgood nonsense.


This is a dumb saying that ignores the fact that humans are fallible beings who make mistakes.

Sure, you might shoot yourself because you're just careless with your gun. Or you might also be thinking of an argument you had with your spouse, or you might be tired when you're handling your everyday carry on your way to work, or you might get startled, or a million and one other things that might prevent you from focusing your attention solely on the gun you're handling.

Machines exist to serve us, not the other way around. If a product doesn't take human habits and flaws into account, it's a bad product, because over a large enough sample size someone will make a mistake.


> This is a dumb saying that ignores the fact that humans are fallible beings who make mistakes.

It's not, because the basic rules of gun handling are designed in such a way that failing to obey any single one of them will not, by itself, result in injury. They are:

* Never point a gun at anything you don't wish to destroy

* Keep your finger off the trigger until you're ready to fire

* Always assume the gun is loaded

* Know your target and what's beyond it.

> Machines exist to serve us, not the other way around. If a product doesn't take human habits and flaws into account, it's a bad product, because over a large enough sample size someone will make a mistake.

Modern firearms are absolutely designed to this criteria - it takes a willful act to discharge them. If someone "mistakenly" fires a gun and hits someone, it requires multiple failures and clearly rises to the point of negligence.


As long as you follow the basic rules of car safety, you won't crash. Stay within the speed limit, be aware of your surroundings, keep a safe distance from the car in front of you, signal 100 feet before a turn. Unless someone else doesn't follow the basic rules of car safety!

You know what system we've come up with to improve car safety that has only one rule to remember? Always wear your seatbelt.

You know what we've come up with that requires zero effort from the driver? Laminated glass. Crumple zones. Airbags. Automatic braking.

This isn't about whether people use a product negligently — put it in enough hands and it will happen. The point is to design products such that the consequence of negligence is not injury or death.


I agree completely. Here, there's perhaps negligence of the system, and the driver that was supposed to be overriding it in situations like this.


Yeah, it will be interesting to see if this was an intentional (algorithmic) response or a failure of sensors somewhere.

I go back and forth with thinking the algorithms in things like self-driving cars should be public. Anything which must make a choice such as smash into bus or smash into bicycle — the public should be aware of the choices the company chooses to implement. Obviously this example is simplistic but it illustrates my point.

Today I’m definitely leaning towards it should be public and scrutinized. Next month I may be worried this would hamper progress.


I'm pretty sure everyone involved and observing also vacillate between those two mindsets. Regardless, events like this will likely hamper progress.


Forgive me, I don't understand what you mean. Please elaborate.


I suspect they mean the trolley car problem: "you are either going to crash into a group of school kids, or into a group of nuns. Who do you kill?"

I.e. did the Uber car kill this 1 pedestrian because it avoided killing 2 other people? Did the Uber car opt for the "least killing" in this situation?

I think this whole trolley car question - while a nice philosophical question - is silly though. I don't think humans would do any better in an emergency situation. In that split second when you suddenly find yourself in the "oh sh-<IMPACT>" situation, do you have time to a) make a deliberate, reasoned & fully-informed decision, and then b) control the vehicle effectively to follow through on that decision? I doubt it. If you had a second or two to think it through and then deliberately make a choice and steer towards that crowd of nuns, you'd probably be able to entirely avoid the accident anyway.

More likely, you'd do what I am sure most people do, which is gasp, stamp on the brakes, shut your eyes, and hope for the best ... assuming you even had time to realise there was about to be a crash before it happened. Some people might swerve, but I suspect they do that instinctively to avoid something in their way, not as a decision to hit something else.

I just don't think self-driving cars will get themselves into situations where they have to choose who to kill in the first place. And even if they did get into those situations, I really, really doubt it would be due to their actions, and I'd certainly trust them to avoid the crash in the first place a whole lot more than the average human driver in the same scenario.

Some could argue "Ah but yes computers are so fast that they CAN make that informed decision about who to kill in milliseconds! So they have to make a choice! The question stands!". I'd argue back that in those milliseconds before the crash was inevitable, they'd act to prevent the accident before a human would even know what the hell was going on anyway. After that it just starts getting into a game of who can conjure up the most ludicrous hypothetical situation that rarely - if ever - happens in real life.


The most obvious example I've seen is when the car (or indeed driver) has the choice to endanger the passenger or a pedestrian.

For example, if a kid runs out into the road in front of the car, it can drive into the kid, likely with no injuries to the passengers. Or it can swerve and drive off of the road, or into another lane, likely saving the kid but putting the passengers at much greater risk.

As you say, in this or any similar situation the human reaction is probably going to be to instinctively brake or swerve without a chance to consider options or consequences. I feel like a computer will have a few cycles to spare to make decisions like that.


One of the most oft-discussed questions around self-driving cars is what should happen if your car can choose between two options: 1) An action that will have a better chance of protecting the people inside the car, 2) An action that will protect people outside of the car, with a whole bunch of variations.

For example - What if someone steps out into the road and you can either hit them, or run into a wall? The former makes it far more likely the occupants of the car will be safe, the latter saves the life of a pedestrian at the expense of possible injury to people in the car.

It is kind of an interesting question, because people can't generally react quickly and rationally enough in a situation like that for it to matter, but in theory, a computer can. But it also dominates the discussion in a way that likely far overstates how frequently this will actually be a decision a computer has to make.

I think that is what the person you're replying to is asking. Given that this occurred at night and on (very likely) uncrowded streets, I feel like it's far more likely the car (and backup driver) just somehow missed the pedestrian.
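
For what it's worth, here is a toy sketch of the cost-weighted choice people imagine when they frame it this way. The maneuvers, probabilities, and weights are all invented for illustration; no real planner is this simple, and nothing here reflects how Uber's (or anyone else's) software actually works.

    # Toy illustration only: choose the maneuver with the lowest expected harm.
    # All options, probabilities, and weights are made up.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        p_injury_occupants: float  # estimated chance of injuring people in the car
        p_injury_outsiders: float  # estimated chance of injuring people outside it

    W_OCCUPANTS = 1.0  # how heavily occupant harm is weighted (a policy choice)
    W_OUTSIDERS = 1.0  # how heavily outsider harm is weighted (a policy choice)

    def expected_harm(m: Maneuver) -> float:
        return W_OCCUPANTS * m.p_injury_occupants + W_OUTSIDERS * m.p_injury_outsiders

    options = [
        Maneuver("brake hard, stay in lane", 0.05, 0.30),
        Maneuver("swerve off the road",      0.40, 0.02),
    ]

    print(min(options, key=expected_harm).name)

Even in this toy form, the interesting part is who picks the weights and the probability estimates, which is the argument made elsewhere in this thread for putting such policies under public scrutiny.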


They're basically asking if the car avoided doing something that would have been more destructive (e.g not hitting 2 people but only hitting one person instead). Or if there was no conflict decision making that happened and the car accidentally ran into the person and it resulted in a fatality.

- AI decides between one life lost as a better potential outcome

- AI made no decision and just happened to hurt someone


I think it's a reference to the imaginary trolley problem, which is often raised as an objection to self-driving capability.

The idea is that making a decision between killing one person vs killing something more valuable than one person (eg. multiple people) is something that drivers do on a regular basis. Computers may not be capable of correctly evaluating the ethics of that decision.


I believe they are asking whether that the vehicle struck pedestrian A in the course of avoiding something else like a pedestrian B.


To make it maybe better: pedestrians B and C.


Please don't, he's going to launch into some philosophical dilemma that's not relevant because these cars aren't making decisions.


No, I'm not going to do that. I just didn't know what they meant.


Not to be the guy who says "I TOLD YOU SO", but it had to happen at some point, sadly and unfortunately. If this is the first (known) case of A.I.-controlled cars killing humans, the number of casualties can only grow from now onwards.


> if this is the first (known) case of A.I. controlled cars killing humans, the number of casualties can only grow from now onwards

This is how counting works.


Every year between 30000 and 40000 Americans are killed in car accidents. With more self-driving cars on the road it comes as no surprise that self-driving cars will cause some of them.


My sympathies to the victim, but this post with the XKCD substitution plugin made my sides leave orbit.


Please don't do this here.


She stepped out in front of the car in the middle of the road, it's not like a human driver would have done any better.


Interesting nobody used the words jail or murder yet. Not even manslaughter.

Just because there was no driver does not keep it from being a crime. Do the engineers go to jail or do the managers or the investors?


In a typical crash, you don't jump straight to "this was murder/manslaughter"/"who is going to jail". At a bare minimum, you have to establish fault. I don't see why this case would be any different.


To play Devil's advocate, there were over 100 people killed in the US TODAY from non-autonomous vehicles [1].

Obviously we'd need to know the number of miles driven in each category to get a meaningful comparison, but let's all keep in mind that a USA where only one person per day is killed on the roads would be a two-order-of-magnitude improvement over today.

[1] https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...
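
As a back-of-the-envelope version of that comparison (every figure below is a rough assumption for illustration, not a sourced statistic): US drivers log on the order of 3 trillion vehicle-miles a year against roughly 37,000 deaths, while the combined autonomous test fleets have driven on the order of millions of miles.

    # Back-of-the-envelope only; every figure is a rough assumption.
    US_DEATHS_PER_YEAR = 37_000  # approximate US road deaths per year
    US_MILES_PER_YEAR = 3.2e12   # approximate US vehicle-miles traveled per year
    AV_TEST_MILES = 10e6         # assumed combined autonomous test mileage so far
    AV_DEATHS = 1                # the single fatality discussed here

    human_rate = US_DEATHS_PER_YEAR / US_MILES_PER_YEAR * 1e8  # per 100M miles
    av_rate = AV_DEATHS / AV_TEST_MILES * 1e8                  # per 100M miles

    print(f"Human drivers:        {human_rate:.2f} deaths per 100M miles")  # ~1.2
    print(f"Autonomous (assumed): {av_rate:.0f} deaths per 100M miles")     # ~10

On those assumptions the autonomous rate looks an order of magnitude worse, but with so few autonomous miles the estimate is dominated by a single event, which is exactly why the per-mile comparison needs far more data before it means much.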


If you look elsewhere in the comments, you'll see some numbers. Autonomous vehicles are new enough that they come out as more dangerous even with one death.


Given the number of fatal pedestrian hits by human drivers, I'd be interested to see how Uber's self driving cars stack up against the apes.

Having said that, one concern one has with self-driving cars is that one can't "reach an understanding" with them the same way pedestrians and drivers regularly negotiate how they're going to behave with each other.

If one doesn't know one is dealing with a self-driving car, one might consider certain situations safe, because they usually would be, but the car negotiates and notices things differently than a human.


Look, I'm all for taking self-driving cars seriously. I've written about how they're effectively WMDs if the software update computer gets owned. I've also frequently commented about the unethical practices that Uber has taken in the past.

That being said, let's not over-react. This is a single death. These are going to happen. Right now self-driving cars are essentially teenagers learning how to drive. En masse there is going to be a death here or there, but, also en masse, we're going to have permanently safer roads once they've learned, if we can secure them from cyber attack.

Try to avoid all-or-nothing thinking here. Definitely advocate for regulations, and maybe Uber wasn't safe enough, but people are either going to die from self-driving cars or they're going to die from self-driving cars taking too long because of public outrage.


This didn't need to happen. It's indefensible that it did.

And there's still no proof that self-driving cars are safer than humans when confronted with novel situations, i.e., the situations most likely to result in injuries and fatalities. If anything, the evidence thus far is that self-driving cars are more dangerous than ones driven by humans.


How are we to judge that self-driving cars are more dangerous from simply one death?

The proper comparison to be made should be between miles driven from autonomous vehicles vs. those of humans and then looking at the incident rates.


The research I've seen is that, for areas with good conditions year-round, they're currently safer than a subset of the population (those who have been in at least one automotive accident in the past 5 years, or something like that).

Still not better than your average driver, but even if we start by replacing alcoholics and routinely poor drivers the net impact on safety goes up.


The title here should be amended to “...hits, kills jaywalking pedestrian...”. Let’s get the facts straight. This person was walking outside the crosswalk. The fact that a computer was behind the wheel is irrelevant. A human broke a law designed to prevent this exact scenario, and the result was both predictable and tragic. It’s terrible, but ultimately the person’s fault.


It's "predictable" that jaywalking will lead to death? That a killed/injured pedestrian was jaywalking can be used in defending the motorist in court. But the vehicle code and road infrastructure does not assume that `jaywalking == death sentence`. Drivers deal with jaywalking every single day -- if death was to be expected, driving and jaywalking laws and regulations would be much, much different than they are now.

The fact that a computer was behind the wheel is the most important factor in this discussion, because the argument for self-driving vehicles is that they, unlike humans, can react to abnormal situations such as jaywalking.


It’s predictable that when you (illegally) put yourself in harm’s way, you might be harmed or even killed. This was a terrible outcome, but it’s no more the fault of the computer that was driving the car than it would have been if it were a human driver. If I go walking into traffic, as a logical person, I fully expect to be hit and injured or killed. The most important laws where jaywalking is concerned are not traffic laws, but rather the laws of physics.

Thankfully this is not the outcome of each and every jaywalking incident, but it is a possibility with all of them. That’s the risk you consciously accept when you walk in front of moving vehicles that weigh thousands of pounds.


The law is that pedestrians always have right of way. Jaywalking is a fine, but not a jail sentence. Not coming to a full stop for jaywalking pedestrians, however, is how I failed my first driver's test.


The law is that pedestrians always have right of way

That actually depends on the state. “Arizona law requires pedestrians to cross the street at corners and/or designated crosswalks...[if] it seems unlikely that a driver should have avoided---as in a case where the pedestrian darts right out into traffic, then it may be that the pedestrian will be deemed 100% at fault for his/her injuries.” [1]

I’m sure that Uber will be sued over this, and they’ll probably settle quickly and quietly to avoid mass hysteria resulting from misleading headlines such as this one. But that doesn’t mean the jaywalking pedestrian isn’t at fault. Walk in traffic, die in traffic. It’s a pretty straightforward physics problem.

[1] http://www.zacharlawblog.com/2015/11/whos-at-fault-in-an-acc...


The blog you quote also notes that a driver may still be found at fault:

> Arizona law also requires drivers of motor vehicles to pay attention and to drive "reasonably" for all traffic conditions. "All traffic conditions" includes the existence of pedestrians in or near the roadways.

> So, should an accident occur between a jaywalker and a car---if shown that the driver could have/should have seen the person and could have/should have been able to avoid, then without question the driver can be held responsible.

We don't know enough either way to cast judgment on fault yet. But to your original point, about the headline being misleading, we don't know enough to know whether the victim was jaywalking, only that preliminary reports seem to suggest she was engaging in that behavior. But labeling her a "jaywalker" strongly implies that she was cited as such, which of course hasn't happened in this case. Calling her a "pedestrian" is a fair choice at this point, not a "misleading" one.

edit: the headline for this story has since been changed from "Self-driving Uber car hits, kills pedestrian in Arizona" to "Self-driving Uber car kills Arizona woman crossing street", which seems fair to me.


[flagged]


Whoa settle down there with your judgment calls. You've made a lot of presumptions given that you know as little as everyone here does. The fact is that as far as we can tell, the only witness is the Uber driver. No investigation has been completed yet, certainly none that justifies you being able to say "she decided to walk in front of a moving automobile". Saying it is "ultimately her fault" is nonsensical. The very blog post you use as an authority on jaywalking says, as I noted, that the driver may still be at fault in Arizona. So where do you get off, with what little you know, in saying that it was the victim's fault?


The article says that she was crossing outside of a crosswalk. Assuming that the author of the article is not lying, the case is closed.

Let’s compromise. The headline should read “Alleged jaywalker dies from Injuries After Being Hit By Self Driving Uber”. Better?


No, that's not how American law works. Despite the protections of the press in the First Amendment, the author of a breaking news article is not considered the sole arbiter of truth. The linked-to article sources the claim to the police:

https://www.reuters.com/article/us-autos-selfdriving-uber/se...

> Elaine Herzberg, 49, was walking outside the crosswalk on a four-lane road in the Phoenix suburb of Tempe about 10 p.m. MST Sunday (0400 GMT Monday) when she was struck by the Uber vehicle, police said. The car was in autonomous mode with an operator behind the wheel.

Should go without saying that the police are also not the sole arbiters of truth, nor are they the authority that ultimately decides who is at fault.

edit: re your edit that the headline should read: “Alleged jaywalker dies from Injuries After Being Hit By Self Driving Uber”. Alleged by whom? No official police statement has called her such, even as they've described her as walking outside the crosswalk:

https://www.citylab.com/transportation/2018/03/first-pedestr...

It may be later that she is determined to be a jaywalker. But this is a breaking news article and it appears the reporters have gone with the broadest description.


We’ll have to agree to disagree. The available facts say that she was jaywalking. You want to argue with that, apparently just to argue. So that’s OK, enjoy your argument.


> “They are going to attempt to try to find who was possibly at fault and how we can better be safe, whether it’s pedestrians or whether it’s the vehicle itself.”

Autonomous vehicles should automatically be 100% liable in every accident in which a person is injured, and it should be paid for by a levy on every vehicle.

A levy is fair because the per vehicle risk is much more uniform than with a driver behind the wheel.

100% liability is fair because the software should be written so that the vehicle predicts the path of all objects within its range and adjusts its speed/trajectory so that there is negligible risk of collision given the worst-case behaviour of the object. If an object doesn't show up on the car's sensors, then it's still the car's fault, as the sensors should be built so that they do pick up all objects. There is a complete power imbalance otherwise: software cannot be injured in a collision but people can, so all people should be 100% covered to give the software incentive to be the best it can.

Summary: a car is governed by predictable physics, hence there is no such thing as an accident involving an autonomous vehicle, only mistakes.
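
For concreteness, here is a minimal sketch of the kind of worst-case rule described above: cap the speed so the car can always stop within the clear distance to the nearest pedestrian. All the numbers (deceleration, latency, clearance) are invented for illustration and don't come from any real AV stack.

    import math

    # Illustrative parameters only -- not from any actual AV system.
    MAX_DECEL = 7.0      # m/s^2, hard braking on dry pavement
    LATENCY = 0.5        # s, sensing + actuation delay before braking starts

    def max_safe_speed(clear_distance_m):
        """Highest speed (m/s) from which the car can still stop within
        clear_distance_m, assuming braking only begins after LATENCY."""
        # Solve clear = v*LATENCY + v^2 / (2*MAX_DECEL) for v.
        a, t = MAX_DECEL, LATENCY
        return -a * t + math.sqrt((a * t) ** 2 + 2 * a * clear_distance_m)

    # A pedestrian 25 m ahead who might step into the lane at any moment:
    v = max_safe_speed(25.0)
    print(f"{v:.1f} m/s (~{v * 2.237:.0f} mph)")  # ~15.5 m/s, ~35 mph

Under those made-up numbers, even 25 m of clearance caps the car at roughly the 35 mph limit that applied here; less clearance drops the safe speed quickly, which is the crux of the crawl-speed objection in the replies below.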


Given current road and vehicle design, that's an incredibly impractical goal. There simply isn't enough separation between pedestrians and roadways; to keep injury risk negligible you'd also have to keep speed incredibly low (way below speed limits) on all but the best limited access highways.

If the vehicle can only crawl at 5MPH because you never know when a pedestrian might just dive in front of you... that's just impractical for real vehicle operation.

It's one thing to hit the brakes because somebody decided to jaywalk without looking; another to expect there is always enough space to stop in those situations.


I disagree. Pedestrian density is generally low enough that the car could be doing 60 km/h and still not hit a "worst case" pedestrian.

For example, how many people do you see walking along freeways? If a person is on a freeway then cars should be slowing down for them. It's a rare enough event that cars could be programmed to avoid all collisions with minimal impact on average speeds.

If a car is around people then it should be going at a low speed. If there are enough people around that it can't get speed up, then that car is effectively in a "shared" or "local traffic" area and should be going slowly.

I'd argue that the effect on average speeds will be similar to having low speed limits on residential streets: a slower period at journey beginning/end, but the majority of the journey would be at speed on sparsely populated arterial roads.

Here's another argument: might intrinsically safe programming even increase average speeds? Why do we even need traffic lights and pedestrian crossings with autonomous vehicles? If vehicles were 100% safe, we could do away with dedicated crossings, and there would be no need for vehicles to waste time at red lights, merely slowing down on the relatively rare occasions when it is necessary.


The problem is likely failed ABS brakes, and on all cars. When the brake is applied and then fails, there is less than 5 seconds to a crash and a likely fatality. No time to react. Having experienced the brake failure on two cars, I have tried to get the attention of NHTSA for a decade. NHTSA has a lock on faulty research that blames an innocent driver for all accidents instead of finding the correct root cause of the crash. My estimate is that 50% to 70% of all fatal crashes are caused by a defective ABS brake. Finally, with no driver to blame, NHTSA will have to solve the correct failure mode for these crashes. The Congressional Sudden Acceleration investigation found nothing to make the roads safer, Toyotas have SA crashes today, and the certified accident investigator only logs items that blame the innocent driver. Unless the braking technology is totally changed in self-driving cars, I expect daily crashes with no one to blame. I have sent Elon Musk letters saying that the Tesla has ABS brake issues causing crashes, but their engineers join in blaming the driver and do not fix the problem. Accident investigators are clueless. Lawyers and DAs are clueless. News reporters are clueless. NHTSA is clueless. NTSB is clueless. Carnage on the roads goes on unabated, and we convict an innocent driver.


On the bright side, with an autonomous vehicle, an accident is a learning experience that will benefit other autonomous vehicles.


Striking a pedestrian in a crosswalk is horrible and terrible news. The reality, though, is how many pedestrians are hit by manual drivers vs. autonomous drivers? On a percentage basis I gotta believe autonomous cars are orders of magnitude safer. Self-driving cars aren't going to be perfect. How many people lost their lives to machines in early factories during the industrial revolution? Imagine if they had pulled the plug back then.


> On a percentage basis I gotta believe autonomous cars are orders of magnitude safer.

Why?

NHTSA reports 1.15 fatalities per 100 million vehicle miles travelled in 2015.

Waymo advertises >5 million road miles travelled. Let's say Waymo + Uber have driven 10 million vehicle miles and killed 1 person. That makes them 10 times more dangerous than human driven cars.

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...
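
Spelling out that back-of-the-envelope comparison (the 10 million combined Waymo + Uber miles is an assumption from this comment, not a published figure):

    # Rough fatality-rate comparison, per 100 million vehicle miles.
    HUMAN_RATE = 1.15                 # NHTSA, 2015

    av_miles = 10_000_000             # assumed Waymo + Uber total (a guess)
    av_deaths = 1
    av_rate = av_deaths / av_miles * 100_000_000

    print(av_rate)                    # 10.0 fatalities per 100M miles
    print(av_rate / HUMAN_RATE)       # ~8.7x the human-driver rate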


This is the important statistic. Evidence so far points to autonomous drivers being worse than human drivers, in which case they definitely have no place on the streets without a lot more R&D.


What about Tesla, though? By November 2016 they had accumulated 1.3 billion self-driven miles [1]. Autopilot was released in September 2014, so that's 2 years, or roughly 600 million miles per year. And usage was ramping up during those years, so I think it's safe to assume at least 1 billion miles/year these days.

2 deaths in over 2 billion miles is roughly 10 times safer than human drivers.

[1] https://electrek.co/2016/11/13/tesla-autopilot-billion-miles...
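
Running the same arithmetic with this comment's assumptions (2 billion Autopilot miles, 2 deaths; both figures are the commenter's estimates):

    HUMAN_RATE = 1.15                              # NHTSA 2015, fatalities per 100M miles

    tesla_rate = 2 / 2_000_000_000 * 100_000_000   # = 0.1 per 100M miles
    print(HUMAN_RATE / tesla_rate)                 # ~11.5x lower than the human rate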


I don't think that is a fair comparison/calculation. Autopilot is not "autonomous" (or at least not as autonomous as the Uber experimental car involved in this crash), and the demographics of Tesla drivers most probably differ a lot from "generic" car drivers. The general driven-miles figure also includes every kind of car (older cars in a bad state of maintenance, or simply with inferior braking and steering systems compared to brand-new cars) and every kind of driver (including drivers at a higher risk of accident).


Absolutely true, but keep in mind as well that each of these companies is running a completely different tech stack, for different use cases. They can't be lumped together for comparison.


Autopilot is not self-driving. The autopilot mode disengages if your hands are not on the wheel. Furthermore, users are advised to only use it in relatively safe situations.


>The reality, though, is how many pedestrians are hit by manual drivers vs. autonomous drivers? On a percentage basis I gotta believe autonomous cars are orders of magnitude safer.

That might be a fair question, except that one death at this early stage of limited realistic trials makes me doubt the correctness of your belief. On the statistics that we have right now, I'm not seeing how autonomous cars are safer.

>Self driving cars aren't going to be perfect.

Shouldn't self-driving cars be held to a better standard rather than compared generously to objectively terrible existing standards? That we think of pedestrians regularly killed in "accidents" as acceptable collateral is already pretty horrific.


Our society has evolved quite a bit from the times of the Industrial Revolution. It’s possible for progress to be made while having life-saving regulations.


It appears the woman was not using the crosswalk, but really that doesn't make much difference to the reaction to the report. Uber's self-driving tech seems a bit underbaked at the moment.


I always take the "outside the crosswalk" reports with a grain of salt. Hereabouts, if the pedestrian lands outside the crosswalk, they're assumed to have been crossing outside of it, even if later eyewitness reports and video footage show they were in the crosswalk and the car knocked them out of it.


There is also the problem that in some areas (e.g. California) all intersections are crosswalks even if unlabeled. It's likely that most pedestrians cross in a crosswalk whether they know it or not.

Arizona has something similar:

> By legal definition, there are three or more crosswalks at every intersection whether marked or unmarked.

Source: ADOT Traffic Engineering Guidelines and Processes, Section 910.1


I don’t know the number of hours their system has been driving, but I highly doubt it is enough to draw comparisons to the average Arizona driver, who hits 0 pedestrians. I also don’t think it works like this in the court of public opinion anyway. This system can never make a mistake of this magnitude and be publicly accepted.


Doesn't mitigate the tragedy or implications for autonomy, but per the article, the pedestrian was outside the crosswalk.


Does it really need to be said that an autonomous vehicle still needs to manage to not kill people regardless of their position relative to any nearby crosswalks?

Today you can cross in the middle of the street and take a calculated risk that the people approaching down the street will stop or slow down enough to not kill you. If they go ahead and kill you anyway, they'll still be prosecuted because their attention should be on the road, and the most basic assumption behind their driver's license is that this person is competent enough and fit enough to drive what amounts to a deadly weapon without killing the people around them. When you violate that assumption, the fault is probably on you. Obviously there are exceptions, such as when somebody intentionally jumps in front of your vehicle, but crossing outside of the crosswalk to get across the street is not even close to the same level as trying to commit suicide and fault will be found accordingly.


> Does it really need to be said that an autonomous vehicle still needs to manage to not kill people regardless of their position relative to any nearby crosswalks?

If you require 100% impossibility of killing anybody regardless of what the vehicle and the person are doing, it is achievable only by making those vehicles nearly useless - such as lowering their max speed to something like 10 mph (maybe even lower, since it's still possible to push a person who will slip, hit their head on the pavement and die). If the vehicle is moving fast and somebody jumps onto the street, there are physical limitations on what can be done. So, if this technology is to exist, there will always be a space where accidents can - and eventually will - happen.


I think the comment you are replying to was just clarifying, since the parent of their post wrongly implied that the person was in a crosswalk.

But yeah. We all agree it's not cool to run down pedestrians or bikers or pedestrians walking bikes etc. no matter where they may be.


If that's something the software can't handle well then it's going to be a huge problem in many parts of the world. In many places in Europe, Africa and Asia for instance pedestrians will cross anywhere and everywhere at any time.


The problem is that you 'believe' it to be orders of magnitude safer.

How about we test it properly before allowing it on public roads?


How do you test it not on a public road?


Yes, but with a human driver you have someone to take responsibility, someone to punish. What do you do with an algorithm to provide justice?


>What do you do with an algorithm to provide justice?

Assuming Arizona hasn't produced law specifically for fatalities in its public-road self-driving test program... The algorithms are just one detail of the total system design, and the design of the system isn't really the issue. People don't get killed by designs or code.

The verification of its safety, and the decision to deploy that system on uncontrolled public streets, will be the issue. People made these decisions, not an algorithm.


Fine/sue the entity that made the algorithm, just like every other situation like that. Or, if you can prove negligence on the part of someone inside that company, prosecute them.


What other situations did you have in mind?



