Given that often human-driven, or human-parked, cars create similar temporary obstacles, the most important question here is: does this happen more-often, or for longer-periods, with autonomous cars?
I don't see the article, or quoted sources, even trying to make that comparison - so this is really only a half-story, compared to what's relevant.
Further, given the remote-guidance possibilities with autonomous cars, it's plausible to think they'll eventually be far, far better than human-driven cars at making-way for higher-priority traffic.
Human drivers sometimes fail-to-notice sirens or other high-priority demands on road capacity. But, an automated system could broadcast the planned-routes of dispatched priority vehicles to every autonomous car in the city, allowing the autonomous cars to preemptively clear paths, before it even becomes an issue of local-reasoning about an exceptional-situation.
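To make the idea concrete, here's a rough sketch of what a car-side check against such a broadcast might look like. (The message fields, clearance threshold, and function names are all invented for illustration; this isn't any real dispatch protocol.)

```python
# Hypothetical sketch only: the dispatch message format and threshold are invented.
import math
from dataclasses import dataclass, field

@dataclass
class PriorityDispatch:
    vehicle_id: str
    route: list = field(default_factory=list)  # ordered (lat, lon) waypoints of the planned route

def distance_m(a, b):
    """Crude equirectangular approximation; fine for city-block distances."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = (lat2 - lat1)
    return math.hypot(x, y) * 6_371_000

def should_preemptively_yield(car_route, dispatch, clearance_m=30.0):
    """True if any upcoming part of this car's route lies within clearance_m
    of the dispatched vehicle's planned route, so the car can clear out early."""
    return any(distance_m(p, wp) < clearance_m
               for wp in dispatch.route for p in car_route)
```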
The problem is that there are literally uncountably many situations that a human with "general intelligence" will understand and react to accordingly. Sometimes smoothly, sometimes less so. But a non-conscious automatic entity needs to have the required behavior programmed in explicitly.
So yes, you might argue that for this particular situation, you "just" need to put in the proper programming and AI/ML training and then "maybe the car will notice more often than a human" as long as the situation is within very specific bounds. At least now that somebody made an article about it.
But it does not change that the autonomous machine does not understand the complex world it is driving in in the slightest, and that, for example, swerving into an unmanned fruit stand without being able to brake is much better than swerving into an unmanned gas pump.
> for example, swerving into an unmanned fruit stand without being able to brake is much better than swerving into an unmanned gas pump.
This is just ridiculous.
If you believe that humans are making rational decisions in split seconds then you are delusional. A swerving, brakeless, human-driven car will hit whatever happens to be where physics takes it. The scared monkey descendant holding on to the controls will as likely do the wrong thing as do the right thing. Maybe a fighter jet pilot or a rally driver can do better, but I wouldn’t count on it.
And besides, how did that AV end up swerving with no brake? This is the reason why autonomous vehicles are set up with redundant brake actuation. If I had doubts about our ability to stop the car, I would much sooner implement a third independent brake system than try to solve whatever philosophical runaway trolley problem you are concocting here.
Runaway trucks are a very real problem, we even have emergency escape ramps in places where they are most common. There is usually no split second decision here: You have a comparable eternity to decide where you want your non-braking truck to end up.
However you construe my particular example to be "ridiculous" under the additional constraints that you imposed on it yourself, right in this article you are looking at an instance where an autonomous vehicle failed to make a reasonable decision that a human would have made.
In fact, a human, the garbage truck driver, had to react. They were not a "jet pilot" or "rally driver", and yet they were perfectly able to resolve the situation that the AV had gotten itself into.
> right in this article you are looking at an instance where an autonomous vehicle failed to make a reasonable decision that a human would have made.
And remote assistance/teleoperation was reacting at the same time, for the vehicle.
Humans will screw up similar cases: end up in the emergency vehicle's way and panic and do the wrong thing. In fact, I saw a fire truck impeded Wednesday because of some judgment mistakes by a human driver that an AV would be unlikely to make.
So, frequency of "doing the wrong thing", and severity (here, probably measured in seconds) are completely reasonable questions to ask-- even if the circumstances where each tends to mess up are different. I don't think it's reasonable to ask that the autonomous vehicle be superior to a typical human on every axis of performance, just superior overall.
Yes. And autonomous trucks are an engineering problem, not an abstract philosophical question. You can ask the question: “how is the autonomous truck going to know if it should slam into the petrol station or into the fruit stand?” And there is no good answer to that. Or you can ask: “How do we engineer autonomous trucks so the probability of a runaway incident is lower than epsilon?” And then suddenly it turns out this is a solvable problem with our existing tools. (With redundant brakes, and with built-in brake health checks.)
> under the additional constraints that you imposed on it yourself
I assume you mean comparing to what a competent human would do? It is implicit in the whole discussion. Human drivers are the current best practice. You are asking about the “fruit stand vs petrol station” question presumably because a human would know to choose the fruit stand.
Nobody is asking this alternate question, because they would immediately feel it is ridiculous: A young boy is crossing the road in front of an AV. In 30 years he will become a politician, will instigate a violent sectarian war which will result in the death of millions of innocents. Should the AV run him over thus preventing all that suffering?
Just to state the obvious: no, the AV should not run the boy over. But why is nobody asking this question? Because it is obviously silly. We as a human can’t look at a young boy and know them as a future mass murderer, therefore we don’t expect this from an autonomous car either.
> right in this article you are looking at an instance where an autonomous vehicle failed to make a reasonable decision that a human would have made.
Oh yes. And it is a fascinating one. I was reacting to your “fruit stand vs petrol pump” hypothetical not to the article directly.
In the real world someone has to program the self driving system to make the decision about how to react. That is, there is a software team somewhere that is going to have to decide what behaviour to program into the system for the trolley problem. So, your statement that it is not an abstract philosophical question is patently false. Obligatory link to The Good Place making the trolley problem real here: https://www.youtube.com/watch?v=DtRhrfhP5b4
It is hopelessly naive to think that these problems can simply be engineered away. In the real world failures happen in redundant systems. Air brakes are supposed to "fail safe", but in reality a host of factors contribute to accidents: how well a truck or trailer's brakes are maintained, engine state, speed, loading, temperature and grade all combine to make them fail. Trains have multiple braking systems, yet sometimes all 3 fail and a spectacular accident occurs.
In addition to all the traditional mechanical issues, self driving vehicles have tonnes of software failure modes that traditional cars do not. More importantly, those software issues are not well understood at this point in time.
If you want to better understand why software can't be trusted to Do The Right Thing, go back and read investigations analyzing failures of systems that have come before. The Therac-25 is a good place to start: https://en.wikipedia.org/wiki/Therac-25
No system a human can build can be completely intrinsically safe. Mistakes by designers occur. Safety is a process that takes time and effort, and it will take decades for self driving cars to work out all the bugs.
> If you believe that humans are making rational decisions in split seconds then you are delusional. A swerving, brakeless, human-driven car will hit whatever happens to be where physics takes it.
I was in a car accident a few years ago where someone left a stop sign without realizing that there was traffic (me) in a lane they couldn't see. I wouldn't describe having felt like time slowed down, but that isn't an absurd way to describe it. I had a weird sense of clarity for the few moments before impact. I was able to slam the brakes, but I was way too close to avoid hitting them. I had a distinct feeling of "the front of the car is heavier and there's a person there". I swerved left instead of right to avoid hitting the front and slammed into the rear passenger side of their car. This spun their car completely around and totaled both cars, but both of us were able to walk away without any injuries.
Some really fascinating studies have shown that your perception of time doesn't actually slow down in moments of heightened intensity, however the detail of the memory (when reflected on later) is higher than for a non-traumatic experience.
An interesting summary of how they studied this if you're curious:
> But it does not change that the autonomous machine does not understand the complex world it is driving in in the slightest
It only needs to understand when to ask for help:
> the driverless car had correctly yielded to the oncoming fire truck in the opposing lane and contacted the company’s remote assistance workers, who are able to operate vehicles in trouble from afar. According to Cruise, which collects camera and sensor data from its testing vehicles, the fire truck was able to move forward approximately 25 seconds after it first encountered the autonomous vehicle
I don’t understand why people think that driverless cars need to deal with every one in a million scenario, it makes no sense.
This phrase is doing a lot of work here. It's one of my least favorite phrases (in a close race with "just do this") and is often associated with unrealistic feature requests.
I'm sure there is a disagreement between SFFD and Cruise as to exactly what happened, but the article implies that the Cruise vehicle isn't the one that moved to fix the problem.
> The fire truck only passed the blockage when the garbage truck driver ran from their work to move their vehicle.
Even if the Cruise vehicle was able to call for help, the car not only needs to call for help, but also wait for a response (at 4am), and give a remote human enough information to control a car remotely in a safe manner. None of these things are easy... not impossible, but not "only needs" easy.
> driverless cars need to deal with every one in a million scenario
Of course driverless cars need to deal with one in a million scenarios. Human drivers deal with one in a million scenarios every day. Nothing is ever the same when driving, so there are always subtle changes. But even if there is an unusual situation, there must be some kind of response. Even if that response is to move to the side of the road and put on hazard lights to indicate that it doesn't know what to do (which it may have done)... that would have been a better response than to do nothing and sit and wait for a human. There should be a default "unknown input" failure mode. The disagreement here is that SFFD didn't like how the Cruise vehicle failed. Maybe there is a better approach.
We are expecting these vehicles to move us around 24/7. That's a lot of trips. At this rate, one in a million scenarios will happen every day. That's the problem with large numbers -- even rare events are to be expected when N is high enough.
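Rough arithmetic, with made-up but plausible numbers, to show how fast "one in a million" stops being reassuring at fleet scale:

```python
# Back-of-the-envelope only: p and N are assumptions, not data from the article.
p = 1e-6          # chance of the weird situation on any single trip
N = 1_000_000     # trips per day across a large fleet (assumed)

expected_per_day = N * p                 # ~1.0 such event every single day
prob_at_least_one = 1 - (1 - p) ** N     # ~0.63 chance of at least one on any given day

print(expected_per_day, round(prob_at_least_one, 2))
```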
> I don’t understand why people think that driverless cars need to deal with every one in a million scenario, it makes no sense.
That a particular situation is rare does not mean that you won't encounter multiple rare situations in a given time frame.
(By the way, the article stated in the beginning that the garbage truck had to move? Either way, it required manual intervention and a human's situational awareness. How does the car ask for the proper help at freeway speeds within seconds--not even split seconds?)
"But it does not change that the autonomous machine does not understand the complex world it is driving in in the slightest, and that, for example, swerving into an unmanned fruit stand without being able to brake is much better than swerving into an unmanned gas pump."
It seems it would have a way of prioritizing such things. That doesn't seem particularly complicated, to be honest... weighted decision making is certainly within its capacity. E.g. shopping cart has fewer "avoid hitting" points than stroller.
It's not like every scenario has to be explicitly programmed in, nor does the program need to run some analysis on a detailed backstory to justify that a baby is more valuable than groceries. In effect, somebody -- probably not a programmer either -- just needs to enter some numbers into a spreadsheet.
(yes there is complex programming that allows that to be manifested in the car's decisions, but the idea that programmers are themselves constantly making "moral calls" in the code, rather than the control data, is fiction)
And if it does have such prioritization in its logic, I'd say yeah, it "understands" the world in that respect. Unless you have defined the word "understand" in some mystical way that precludes non-biological machines by definition.
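To make the "numbers in a spreadsheet" point concrete, here's a minimal sketch of what such a human-entered value table and the lookup against it might look like (the classes and weights are invented for illustration, not anyone's real data):

```python
# Invented example weights -- the point is that the values live in data, not in code.
AVOID_HITTING_COST = {
    "pedestrian":    1_000_000,
    "stroller":      1_000_000,
    "cyclist":         900_000,
    "occupied_car":    500_000,
    "gas_pump":         50_000,   # secondary-hazard risk (fire)
    "fruit_stand":       1_000,
    "shopping_cart":       500,
    "unknown_object":   50_000,   # conservative default for anything unclassified
}

def best_swerve_target(candidates):
    """Given the classified objects in each possible escape path, pick the
    path whose worst (highest-cost) object is least bad."""
    def path_cost(objects):
        return max(AVOID_HITTING_COST.get(o, AVOID_HITTING_COST["unknown_object"])
                   for o in objects) if objects else 0
    return min(candidates, key=lambda path: path_cost(candidates[path]))

# e.g. best_swerve_target({"left": ["fruit_stand"], "right": ["gas_pump"]}) -> "left"
```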
> It seems it would have a way of prioritizing such things.
You are putting the cart before the horse. The problem is not in prioritization, the problem is in having the correct ontology to even get to the "prioritization" stage.
Does the car know what a fruit stand is? Does it know what a gas pump is? Does it know how the fruit stand relates to the gas pump in "expected outcome when being hit by a car"?
If you say "we can program that in", read my post again.
On a level below identifying stop signs and lollipop ladies and push carts, an SDC's stack needs to be able to identify:
1) Driveable areas. If something looks like a cliff maybe don't go there.
2) Fleeting obstacles. Dust blowing in the wind. A stray plastic bag, winging its way northwards to the waiting maw of a baby turtle. A person with a borderline credit score. A stray cat chasing a bug. That sort of thing.
3) Anything else that's physically present in the path of the car. Doesn't matter what it is. Do Not Hit The Thing is the second lesson anyone learns when being taught to drive, after Make It Go So Hit The Thing Is Even An Option.
I would imagine the car has ways of identifying things that are specific hazards, such as a gas pump. Fruit stand is probably categorized as "other."
"the problem is in having the correct ontology to even get to the "prioritization" stage."
That part isn't done by the program, it is done by whoever enters the prioritization numbers. That is, someone, possibly a committee, can dial up the "avoid gas pumps" weighting relative to the "avoid baby stroller" weighting if they are concerned that cars might swerve so widely to avoid coming near a stroller that they are risking hitting a different hazard. Or they can dial up the weight of grocery carts relative to dogs, since children might be in a grocery cart. Etc.
Those are humans, who can do whatever ontological analysis they need when deciding on the settings. The car doesn't need to access any of that, it just needs a general lookup table that can help make optimal decisions based on the human-entered value system.
I mean, you're right; someone making that list might not think to include "centaur", and maybe one Halloween a child is dressed up as one, and the computer vision system interprets the "centaur" as a horse instead of a child, and it makes the wrong decision, but how many centaur-related accidents do you think self-driving cars are going to be involved in each year?
It's completely feasible to imagine writing a list of the top, say, 100 things in the world that a car needs to make morally-significant decisions about, and then deal with every other accident or near-miss after the fact. Interactions with unrecognised objects should be rare enough to be a rounding error when comparing accident rates between autonomous and human-driven miles.
Even if an automated vehicle's intelligence misinterpreted a human child as a horse, it should only hit it in the unavoidable circumstance of trying to preserve a human life.
If it’s choosing between hitting a pole and hitting a pony, it should always hit the pole so long as no one is injured.
The real problem is that occasionally these cars today mistake roads for oceans, walls for roads, and people for inanimate poles.
You're asking the car to make moral decisions. Given the choice between hitting a child or an old woman, which will it choose? A bicyclist vs a pedestrian? A lemonade vendor vs a hotdog vendor?
This is all assuming it can distinguish between all of these objects, and that a real person could assign relative moral values to hitting one over the other.
> [...] swerving into an unmanned fruit stand without being able to brake is much better than swerving into an unmanned gas pump.
That's a great example.
Not to mention triggering any Rube Goldberg-machine-like chain reaction (even with just a few steps) where a series of events would need to be predicted.
There are not uncountably many – we live in a finite world governed by understandable physical and civic laws.
There's an actually measurable rate of humans blocking other traffic, and for how long before resolution, and an actually measurable rate of autonomous vehicles blocking other traffic, and for how long before resolved.
If the rate for autonomous vehicles is already below that of humans, or rapidly headed there, that's far more important to note than to theorize about other corner-cases.
(Also, as a San Francisco driver, I have serious doubts about the "general intelligence" of my fellow drivers. I don't see any reasons to hold autonomous cars to a higher standard – perhaps a much higher standard? – than other cars.)
OP was obviously not using that term mathematically (i.e. the cardinality of the power set of natural numbers), and obviously meant something in the neighborhood of "effectively not countable". And, again, in all but the formal mathematical meaning of the word "countable", many things are not countable (e.g. no one will live long enough to count all the natural numbers).
In the real world, assigning an ordinal number to an object/event/thing has a nonzero time cost. And accounting for every situation in software has a much greater cost.
> If the rate for autonomous vehicles is already below that of humans, or rapidly headed there, that's far more important to note than to theorize about other corner-cases.
Okay, but dangerous human drivers get systematically removed from the streets. Are we doing the same for self-driving cars? In this context, does every Tesla count as the same "driver"?
Maybe all Teslas should have their autonomous driving centrally disabled every time one causes an accident, or breaks a law, until that specific issue is fixed. But it would be impossible to run a car company that way, so of course that takes priority over, say, keeping innocents alive, right? /S
While I do agree that even a below-average human intelligence can better cope with these kinds of situations (also because it has access to many more opportunities to understand; for example, we can tell what's going on by looking at the face and the hand waving and/or yelling of the driver in the other vehicle; humans are quite good at understanding other humans), I don't think it follows that consciousness is necessary to achieve that.
Autonomous cars will crash in situations where humans wouldn't, but the opposite is also true. Autonomous cars don't fall asleep, drive drunk, or get distracted on their phones.
Personally I try to stay alert while I drive, so I feel I'm safer driving myself than letting the machine do it. But I'm less confident that the best autonomous cars will have more accidents than humans in general, and they're improving all the time.
People can affix fake sirens to their cars and if they blare them at you you should pull over... after the incident society then throws an incredibly harsh penalty at the offender.
To make sure things run smoothly it benefits society to never doubt or question whether people who say they're police officers actually are - they might have something incredibly important to say (Like, hey, there's an active shootout ahead - please don't keep driving and get yourself injured).
If you as an individual create some funny gag to fake out autonomous vehicles into thinking that you're a cop whether you cheekily do it with "TOTALLY NOT A COP CAR" written on the side to get a laugh or not... you're almost certainly going to be charged with a felony crime.
> needs to have the required behavior programmed in explicitly
This is missing the entire point of ML. ML is literally defined as not having to explicitly program responses in for every situation.
Needing to pull over because a fire truck has told you it's coming your way in 2 minutes is pretty easy compared with some of those other "uncountably many" situations these cars need to deal with.
> the autonomous machine does not understand the complex world it is driving in
Daniel Dennett would like to have a word with you. It's perfectly possible for systems to "understand" things for any useful definition of the word "understand". A calculator absolutely "understands" arithmetic.
> This is missing the entire point of ML. ML is literally defined as not having to explicitly program responses in for every situation.
So how do you know how the system will respond to an arbitrary situation? You could easily argue that we don't know how an arbitrary human will respond to an arbitrary situation, but we have systems in place to deal with the consequences if they handle it badly.
For example, if a driver handles a situation badly enough, they could lose their license. If an autonomous car does something bad enough that a human would have lost their license, what happens? Do all of that company's cars get pulled off the road until the bug is fixed and validated?
> So how do you know how the system will respond to an arbitrary situation?
You put them in that situation and see how they respond. If they respond badly, you keep training them until they respond better. I'm not saying it's easy, but I am saying it's exactly what autonomous-car developers have been doing all this time.
> If they respond badly, you keep training them until they respond better.
Right, but what do you do with all the other cars on the road that presumably still have the bad behavior (while the fix is being developed)? Just assume that the situation is rare enough that you'll be able to fix it before it happens again?
You're basically saying test-driven development can find all problems with software, and it's well-known that that isn't the case. It's very dangerous to assume TDD is all that's needed when lives are at stake.
> This is missing the entire point of ML. ML is literally defined as not having to explicitly program responses in for every situation.
I suggest you look up the no-free-lunch theorem of supervised learning.
> A calculator absolutely "understands" arithmetic.
I'm not sure how that's anything else than proving my point. Let's say a calculator "understands" arithmetic. It does not understand anything I would apply those calculations I make with it to. I cannot tell it "calculator, go do my taxes".
Your particular example is not even true: A calculator is able to perform calculations, it does not understand any of the axioms, theorems, and uses around it.
> A calculator is able to perform calculations, it does not understand any of the axioms, theorems, and uses around it.
It understands the axioms because they've been literally built into its tiny brain. It doesn't understand the uses of arithmetic because nobody programmed it to.
Can you provide me a non-vacuous definition of "understand" that doesn't rely on human consciousness being extra-special and magic?
I don't think it's a question of knowing what an axiom is or how a calculator is implemented. I think it's a question of disagreeing on what "understanding" means.
What does it mean to understand something? Obviously (to me and I presume to you) a calculator doesn't understand anything! It doesn't have the capacity for understanding. Obviously (according to, I presume, feoren and Dennett) "understanding" means something very different, and a calculator is perfectly capable of "understanding" arithmetic.
There is no math in a calculator. It’s a pile of logic gates assembled in a way to appear to perform mathematical operations. An ALU has no “understanding” of arithmetic, it’s just a canned, finite set of inputs and outputs. Not an axiom to be found.
The pile of logic gates is an encoding of the axioms. The fact that it evaluates mathematical expressions correctly is both necessary and sufficient to show that it understands arithmetic. Therefore the calculator knows the axioms and understands arithmetic.
Except it doesn’t implement the axioms of math, it implements a crude facsimile of them for a certain subset of numbers, because what it’s really doing is a non-mathematical physical process.
If you want to argue that an ALU is performing “boolean logic” just because it’s made of logic gates, be my guest, but in my opinion that’s a bit like saying a bucket is “doing math” because if you put 5 rocks in and add 7 rocks, it’s smart enough to contain 12 rocks when you’re done.
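For what it's worth, here is the "pile of logic gates" in toy form: 4-bit addition built from nothing but a NAND primitive. Whether wiring this up counts as encoding the axioms or merely producing a crude facsimile of them is exactly the disagreement above.

```python
# Toy illustration only: everything below is built from a single NAND primitive.
def NAND(a, b): return 1 - (a & b)
def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

def full_adder(a, b, carry_in):
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add4(x, y):
    """Add two 4-bit numbers with a ripple-carry chain of full adders."""
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result  # wraps around at 16, like a real 4-bit ALU

assert add4(5, 7) == 12
assert add4(9, 9) == 2   # overflow: the "facsimile for a certain subset of numbers"
```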
It is actually "obviously true" unless you believe that human brains have a special metaphysical magic that makes them "more than just a system". That's literally the only alternative: human brains are magic and only humans are ever capable of "understanding". It's a vacuous definition of the word. Systems can understand things, which is good, because the human brain is nothing more.
See: Daniel Dennett's response to Searle's Chinese Room.
I did not claim it's about "metaphysical magic" that machines lack. But I do believe that humans usually have a very rich and multifaceted life outside of driving on a road, compared to cars, and are therefore able to integrate "training data" that cars are not. Unless your plan is to make them an active member of society.
But doing arithmetic and understanding it are different things. The latter is some reflection on the concepts, but the former is just carrying it out. There are many humans that can do arithmetic but don’t understand it
Also I do believe there is a metaphysical difference between a human and a calculator, literally magic
> Given that often human-driven, or human-parked, cars create similar temporary obstacles, the most important question here is: does this happen more-often, or for longer-periods, with autonomous cars?
Honestly the article is pure tech-scare bait. It blames a car that stopped (while it was driving) when a truck was driving at it head-on, but tries to gloss over the fact that a (human-driven) garbage truck is the reason the fire truck had to go into oncoming traffic in the first place.
> a San Francisco Fire Department truck responding to a fire tried to pass a doubled-parked garbage truck by using the opposing lane.
They need to be able to handle messy situations at least as good as human drivers, which isn't necessarily a high bar.
And importantly, unlike human drivers, autonomous cars will only get persistently better at handling these kinds of situations as they encounter more of them.
Yes it is an impossibly high bar, and no they will not persistently get better unlike humans until they develop fully generalized AI.
It's so easy to just say human drivers aren't as good, but it's not grounded in any reality.
Just take voice controlled assistants for example. We've made pretty big strides on them, haven't we?
And yet an entirely unmotivated teenager who's doing badly at school working a 7-Eleven outperforms voice assistants by several orders of magnitudes, through the sheer wonder of context clues and general intelligence, no matter how dim you want to assert that teenager to be.
I am proud of my voice assistant when it properly understood that I wanted to indeed set a timer for 50 minutes instead of 15 minutes, while any person would immediately understand what I mean even in a noisy environment just from why I'm setting the timer, and even fuzzily adjust their behavior based on what continues to happen.
Anyone who claims automated cars can fully learn the arbitrarily complicated physical situations they have to navigate around is either disingenuous, or does not know how computers work.
> They need to be able to handle messy situations at least as good as human drivers, which isn't necessarily a high bar.
You might have an overly-optimistic opinion of what AI can do. Not to say that humans are not stupid quite often, but we are very far from having AIs that can be as good as a 4-year-old child.
AIs are good at what they know and what they can account for. So yes, they will get better at what they know, but they cannot, by design, get better at what they don’t. So there will always be situations where they will be unbelievably stupid because their designers did not think it would happen. There is no solution to that at the moment.
To put it mildly - this is unrealistic in the US except for major cities. Even then, depending on the city and where in the city, it's going to be very difficult to convince people and to implement methods of cutting cars.
While I think it's a good idea, I just don't see this happening in the majority of the US other than major cities. In my state I just don't see how it would be possible given people commute 30+ miles regularly.
Start with better zoning laws so that people are not pushed to live 15 miles away from their work and general/light commerce areas.
After that, just copy the playbook from the Netherlands. Amsterdam was very much like most big American cities until the 70s; since then, their streets have been steadily redesigned for human scale and less car dependency.
It can be done and it can be done easily if Americans stop believing in their Exceptionalism.
Flatlands or not, it does not mean that cars are the only reasonable mode of transportation. Even if you insist on trying to solve a social problem with technology, it would make more sense for those living in hilly areas to push for electric bikes than self-driving cars.
If that’s what the people who live in an area want, that’s a democratic outcome. Democratic outcomes are something that I’m inclined to support absent compelling evidence why they should be thrown out.
Was it a majority of people that pushed for the zoning laws that discriminated against minorities (it's mentioned in the videos if you don't know what I am talking about), or was it a systemic issue that needs to be corrected?
Do watch the videos, especially the "Strong Towns" series about the issue of how the rich suburbs are actually subsidized by poorer people living in bankrupt city centers. The only "straightforward" path for this to change is by getting younger people to participate a lot more in town hall meetings and to get their city councils on board with change. Those who benefit from the status quo have no interest in doing that.
Yes, why don't we just do the easy thing and fundamentally rework all the physical infrastructure of every locality all at once. Gosh, why didn't we think of that.
It doesn't have to be all at once. The point is that all the money, time and intellectual capital put into the "building FSD cars" are not going to solve the most fundamental problems that exist in car-dependent cities and societies.
1. If fully self-driving cars come along at some point in the recognizable future, I would bet pretty large amounts of money that they'll be extremely popular in Europe or whatever else you're thinking of as not "car-dependent."
2. The amount of money, time, and intellectual capital put into building driverless cars is an infinitesimal drop in the bucket compared to the money, time, and intellectual capital that would be necessary to make a gigantic change in the US physical infrastructure. Particularly time -- there's no way that America is going to change into Europe, well, ever, but particularly not in our lifetimes, so if you want a better transportation system, I wouldn't bet on that.
3. But also, the time, money, and intellectual capital spent on driverless cars is non-rival to any (limited, local) makeovers that American localities are going to get.
1. "Extremely popular" compared to what? Non-autonomous cars? Taxis? Bikes? Buses? Trains? Yeah, I could see people thinking it would be cool to get a self-driving car as a replacement for the occasional taxi, but why would people stop using the existing public transit alternatives that already work?
2 and 3. The American way of city growth is the cause of financial troubles for most cities. And I cited Amsterdam precisely because it works as an example of how you can redesign a city to reduce car-dependency step-wise. Take a look at https://www.youtube.com/playlist?list=PLJp5q-R0lZ0_FCUbeVWK6... to see the economic problems that plague cities in the US, and how a lot of them are improving simply by changing zoning laws.
1. You seem to think that driverless cars are unimportant for places like Europe. I think you're deeply wrong. If cars that can equal or exceed the performance of median humans come to the market without adding more than a few tens of thousands of dollars to the cost of the vehicle, I expect that they will rapidly become more popular than current automobiles in Europe (and Europe is far from car-averse -- the Netherlands has about one car per two people), and that they will on the margin significantly (but not catastrophically) reduce usage of public transport. (EDIT: If the pattern of driverless cars is dominantly one of rides-for-hire rather than ownership, the total unit sales of driverless cars might be significantly lower than current unit sales of normal cars, but I would expect driverless cars to rapidly eat most of the passenger-miles of automobiles plus a significant share of the passenger-miles of public transit).
2 and 3: You didn't even remotely address anything I said.
> The Netherlands has about one car per two people
That is not the relevant metric. The important metric is: how many trips does the average Dutch person take by car that could be taken by some more efficient mode?
Lots of people in European cities have cars but use them only on the weekend. Or they use the car to go to work on the outskirts of the city, but manage to use public transportation when going to a football match for a night out. Lots of families have only one car, whereas in the US it would only be possible to live if each adult had their own car, etc.
> You didn't even remotely address anything I said.
You are right, I didn't. Because to me, holding self-driving cars up as the panacea that is going to solve the issues in the US is ridiculously naive. It is an illusion created by the corporations that are not interested in actually solving the problem and just want to push out more cars, keep people addicted to broken lifestyles and over-consumption. And it is backed by short-sighted technologists who see the world through their nerd lenses and don't stop to think whether there are better, lower-tech ways to improve everyone's lives.
Nobody held driverless cars up as a "panacea" for anything; that's completely something you made up.
Let's just review this whole conversation:
1. I suggested to a poster who wanted to blame the whole Cruise problem on the garbage truck that garbage trucks do in fact have to block lanes.
2. You then jumped in and said that America should "just" reinvent itself so that cars weren't a big deal. You left it implicit why that was relevant here, but I think I understand you to believe that driverless cars are an unimportant technology.
3. I pointed out that "just" changing the fundamental mode of transit across all of America is in fact a tall order.
4. And then you've gotten less and less responsive to actual points until you get to here where you're now suggesting that driverless cars aren't a panacea, which is an argument nobody made.
To affirmatively restate my points:
1. Driverless cars, to be successful, must deal with a lot of messy road conditions, not just the 99% case.
2. If driverless cars do at some point cross the above line and genuinely become equal to or greater than the median human driver, I think they will be an important technology that will improve people's lives -- in both the United States and very significantly (if possibly somewhat less so) in Europe.
3. Driverless cars are plausibly on a much shorter timeline than remaking all of America's physical transportation/density infrastructure, and plausibly much less overall expensive than remaking all of America's physical transportation/density infrastructure.
4. Investment in driverless cars is non-rival to other changes to America's physical transportation/density infrastructure.
The argument that I am making is simple: the more you push for self-driving cars, the more you are pushing to a car-centric system and the farther you are getting from better, more sustainable solutions. It is not just that is not a panacea, it is that it is a bad medicine.
I suspect that you will find that your opinions are just not shared, and that people -- everywhere, including in places that you imagine to be ideologically aligned with you -- generally find that driverless cars make their lives better on the margin, if good driverless cars start to exist.
That is way too low of a bar. All else being equal, even I would say that my life would be marginally better if self-driving cars existed.
The question is whether people value this more than the benefits of not being car-dependent.
You might think that these are not connected, but they pretty much are. Taking that in consideration, I could bet that only a minority would be willing to accept the trade-off.
Nope, that's not the question. You asserted -- all of one post up -- that driverless cars are "bad medicine" that make life worse.
If in fact they improve people's lives, then great. And if people see a path to further improving their lives by, separately, changing other things about infrastructure or density or whatever, then that's fine too. Again these things are non-rival.
They are "bad medicine" if considered as a solution for the public transportation issues in North America, yes. It's not the technology that I am against, it's the fetishism that the tech will be so amazing that it can justify all the shit that people are putting up with now, and not looking at more important problems that if solved could eliminate the need for the tech in the first place
> And if people see a path to further improving their lives by, separately, changing other things about infrastructure or density or whatever
That's just a really twisted way of saying that you don't mind sacrificing the commons if it means you get to play with cool gadgets. It's a lot easier for one individual to go and buy a driverless Tesla and think "oh, at least my car does not stay in a parking lot after my ride is done" than it is to go and promote actual change in urban planning to avoid car dependency in the first place. But because you are (presumably) on the top of the pyramid, you don't actually care about it.
> Again these things are non-rival.
At the first order, it may seem like that. But after we see the interaction of these apparently-orthogonal policies (e.g, zoning laws and public infrastructure) and their inconsistent implementations the problems become very clear. It is hard to argue only with hypotheticals, but I could bet that if driverless cars became a reality tomorrow, cities would be worse in 10 years than they are today.
Thinking that overall quality of life can be improved just by some future technology is already a fetish, if you consider that there are low-tech ways that can achieve better results today. It's this apathy that my original comment wanted to point out, and it was not a personal attack against you.
There is no world in which robotaxis exist that would inhibit robobuses and robobikes from also existing and being integrated into one coherent transportation network. The existing toolkit of pricing controls makes it easy to pull a lever to get more people onto the smaller or shared vehicles as needed, and with roughly linear ability to expand the fleet in any of these axes, the roads they're running on already built and paid for, and the cost of deployment rapidly dropping, nothing will stop them from taking over in a span of 5-10 years.
It'll be the biggest boon to urban transit we've ever seen, because what self-driving actually does is make modality, routing and scheduling less important to individual trip planning.
> the roads they're running on already built and paid for
Roads still require maintenance, and without a profound change in how cities are organized, these costs are always going to be overwhelmingly large compared with the cost of everything else.
> self-driving actually does is make modality, routing and scheduling less important to individual trip planning.
That is the worst type of optimization. It still keeps people in desolate suburbia and leaves millions of people across America with the sensation that "commute time" is a constant that cannot be avoided, so it should at least be made comfortable.
Investing to make the transportation systems smarter without even looking at how the urban spaces could be changed to get rid of cars is like the (mythical) story of NASA putting millions of dollars into making a space pen instead of using a pencil.
No, but there is way less dependence on cars, at least in medium cities and up. Many developments in America just build suburban deserts where there are just more single-family homes around you, where you have to drive to the next supermarket, both because it's so far away and because sometimes there literally isn't anywhere for pedestrians to walk. There are a few good videos on the subject:
There are, we are not as dependent on it, and you can bet that the absolute majority of people could live on car-sharing and cab-hailing systems, which would mean that 99% of the "self-driving" problems would go away.
While I agree with you that car dependency is a serious problem in North America and I am also very critical of the obsession with self-driving, I don't think they are necessarily correlated with each other. In fact, one could argue that car-sharing and car-hailing are more amenable to self-driving than a market in which each individual is expected to own a car.
I think self-driving is more established in North America than in Europe due to less strict regulations, more availability of capital and technological investment, and particularly the greater predictability and homogeneity of the car-centric roads when compared to European countries. I'd like to see how a self-driving car handles Rome for example—if it can.
The question that most people seem to forget to ask: what is the societal benefit of having self-driving cars that could not be had by reducing car dependency in the first place?
"Roads will be safer"? Yes, they are safer already on cities with well-functioning public transit.
"Less drunk-driving"? Same idea: if teenagers can live in areas where cars are not a necessity, they won't be behind the wheel and still be able to meet their friends, go to a party, etc.
"Less space needed for parking slots if cars are FSD"? Also true if people are not car-dependent.
To me, self-driving makes sense for trucks on highway roads, not for city traffic. And even then, we could also apply the same idea and think "why not improve the rail infrastructure"?
Most of my car trips are random route. Most public transit (and all rail) runs on a fixed path. It’s rare that that fixed path aligns with my random route. The only trip I regularly take that vaguely aligns with a bus route is from 2 blocks away from my house to the grocery. (The grocery is about 4 total blocks from my house meaning walking halfway and then waiting for a bus still makes no sense.)
Do you think that people living in countries that are not car-dependent have all "perfectly aligned routes"? No, they use bicycles, they walk, they rely on better public transit than the one that is available to you.
But anyway, how does that relate to the original question? Are you trying to justify your interest in FSD with the argument that it would give you some network of autonomous taxis?
Sure. I thought I’d left the dots close enough, but perhaps I didn’t. I believe that full self-driving cars will bring benefits above an area just being less car-dependent in general because of their inherent random route ability, which represents a huge time savings over rail and buses which seem to run in a radial spoke fashion.
Take the case of an elderly person with limited personal mobility. They are likely driving today because that’s their best option by far. They’re probably not going to start walking to their doctor; it’s probably not a good idea for them to take up biking. Taking a bus into the city center and another one back out to get to a point that’s radially 2-3 miles away and then reversing that to get home seems wildly less practical than hopping into an FSD ride, whether billed like an uber or privately/family owned. That’s what FSD brings over a status futurus that is less car dependent but lacks FSD.
Please, please watch the videos I linked. Especially the Not Just Bikes channel. For example, this one https://www.youtube.com/watch?v=RQY6WGOoYis can quickly explain why a transport system flooded with autonomous cars and poor public transit would make the overall system worse for everyone.
No rational person can watch those videos and conclude that the best course of action to solve urban transportation requires self-driving cars. Getting FSD cars is just a rich nerd fetish which will not solve anything and likely contribute to make suburban sprawl worse.
I’ve watched all the Not Just Bikes videos in the past and just rewatched part of the first one you linked immediately above.
The Downs-Thompson paradox may explain exactly what I see. We recently got priority bus lanes in my city (Cambridge, MA). The primary effect is that buses can now more quickly travel to where I didn’t need to go in the first place. Cars are still significantly faster, so people choose them.
Regardless of the amount of car traffic, a bus that goes from the outer edge of my city into the center to let me transfer to another bus to go out towards an outer edge at a different radial will never come close to driving point to point, even with parking aggravations. That explains why nearly everyone who is capable of driving and can afford a car has one. I think it’s also why many public transit advocates seem to focus so much energy on penalizing cars to enable public transit to become competitive. If competitively superior public transit existed, people would use it because they aren’t generally stupid. Even if it was roughly equal, people would use it in a lot of cases.
I also lived in Cambridge, and I am very familiar with the 47 bus leaving from Central. I also biked there a lot, and I can tell you that there were very few times where I needed a car. The only times I needed one were when I was going to the suburbs and the alternative infra is non-existent. For the majority of cases, the T + biking was more than sufficient.
It is not about penalizing car driving. It would be a good start if the US did not subsidize car ownership. If the true cost of car ownership was put on those driving, perhaps cities would be able to finance better public transit...
Garbage trucks (the article mentions the operator was working) and other service vehicles, including school buses, are expected to block travel lanes for extended periods of time, and humans are usually able to understand the situation and react accordingly in emergency situations.
Not familiar with SF or cities? Cars, delivery trucks, and garbage trucks often stop in the traffic lane for loading/unloading because there is literally nowhere else to stop. It’s annoying but expected and understandable.
If they do that prior to sirens showing up there is no issue. They can’t predict the future.
Suppose the travel lane had been blocked by any other obstacle, say some large object fell into the road. The Cruise car would have failed in the exact same way, there just wouldn't be an extra human in the loop to blame.
I wouldn't point the blame to the garbage truck, but I agree with your description of it as scare-bait. I would gladly accept random 25-second delays from the fire department in exchange for eliminating the 30~40k American deaths[1] each year due to traffic fatalities.
> does this happen more-often, or for longer-periods, with autonomous cars?
How would one go about finding that data? Is there an authority that tracks how often and for how long emergency vehicles are impeded by human drivers? You might get something along the lines of average time to first response on the scene.
> Human drivers sometimes fail-to-notice sirens or other high-priority demands on road capacity. But, an automated system could broadcast the planned-routes of dispatched priority vehicles to every autonomous car in the city, allowing the autonomous cars to preemptively clear paths, before it even becomes an issue of local-reasoning about an exceptional-situation.
This doesn't comfort me. We already live in a society whose leadership doesn't believe in net neutrality, I have a hard time believing that an automated driving system that programmatically prioritizes vehicles on the road won't be co-opted in the same manner to benefit those with capital at the expense of those that don't.
The autonomous car companies have a lot of the data, since they've run the same cars with drivers extensively. They can see the differences in rates-of-situations between cars-working-autonomously, and cars-with-drivers.
Given the long periods of operating with autonomy-plus-backup-driver, they also have stats on how often an in-car driver overrides the car, and exactly why.
Cities also can & should independently collect such data. Some big-city buses are already equipped with a photo-ticketing mechanism for immediately recording & citing cars illegally parked in their paths/pickup-area. Emergency vehicles should have the same.
> This doesn't comfort me.
Well, I can't make you comfortable if you've got free-floating paranoia about abuses by the powerful. There are many such abuses!
But I can point out that the exact same rational comparative criteria should apply before making a tangible policy decision about the use of the shared roads. We shouldn't be ruled by imaginative worries extrapolated from other insecurities, but rather real measurements of how often a "send a request to clear roads for emergency vehicles" is broadcast, whether each use has a recorded & verified legitimate justification, whether such broadcasts save lives versus not-using-them, etc.
> In the lane directly in front of them, facing them, and honking?
Yes! In SF and other big free-country cities at 4am, some human drivers are high!
Sometimes human drivers are passed-out or suffering a health crisis. Sometimes they've left their vehicles blocking key rights-of-way as they go somewhere else for many minutes.
In San Francisco, I've seen human-driver-unattended cars block light-rail trains many times. (I live not far from the N-Judah line, which also has a history of pedestrian bystanders physically moving cars to clear the road: https://www.munidiaries.com/2014/02/10/muni-riders-lift-car-...)
We need to judge autonomous car suboptimalities against real human drivers, with all their failures - not against idealized, unerring humans-at-their-best.
The first 2 sentences of the article we're discussing:
> On an early April morning, around 4 am, a San Francisco Fire Department truck responding to a fire tried to pass a doubled-parked garbage truck by using the opposing lane. But a traveling autonomous vehicle, operated by the General Motors subsidiary Cruise without anyone inside, was blocking its path.
I don't think it's intolerable to use a straw-man argument when discussing scarecrows.
Autonomous car didn't seem to fail to notice the fire truck, just that it arguably took the wrong course of action by yielding to the right instead of reversing into the intersection.
You're missing the angle that when a human blocks passage for an extended period of time with their vehicle, it's almost always because the human is away from the vehicle: the human has deliberately parked where they are not supposed to and then walked away.
When the computer does this, it's always "sitting" in the vehicle, and is not acting on a selfish intent: it's just carrying buggy/incomplete requirements.
A human sitting in the vehicle is generally capable of responding to "sir, can you move your car?", or a horn from the fire engine or whatever.
But if the human-away-from-vehicle blocks things for minutes, while the temporarily-confused-AI has a remote-operator fix the issue in 30 seconds, or others in the area find another solution even faster than in the unattended-car scenario... isn't the advantage, given the actual weighted occurrences, still for the autonomous car?
> an automated system could broadcast the planned-routes of dispatched priority vehicles to every autonomous car in the city, allowing the autonomous cars to preemptively clear paths, before it even becomes an issue of local-reasoning about an exceptional-situation
These cars can barely drive themselves with hyper-accurate maps on a sunny day without any surprises, and we want them to receive and evaluate emergency dispatches from some central system that doesn't yet exist, and respond accordingly? I think we're a ways away from that.
The cars already rely on external routing guidance, from cloud services similar to 'Google Maps' driving directions. An alert from emergency services can simply... remove roads from consideration, the same as is already routinely done when other accidents/traffic cause dynamic routing updates.
So, you don't even need to upgrade the autonomous vehicles, you just change the other, existing, flexible, proven system for recommending routes.
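A rough sketch of what that looks like on the routing-service side, assuming an ordinary shortest-path router (the graph and edge closures here are invented; the point is only that an emergency broadcast is just another dynamic routing update):

```python
import heapq

def shortest_route(graph, start, goal, closed_edges=frozenset()):
    """Dijkstra over a {node: {neighbor: cost}} graph, skipping any edge the
    dispatch system has temporarily closed for an emergency corridor."""
    dist, prev, queue = {start: 0.0}, {}, [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, cost in graph[node].items():
            if (node, neighbor) in closed_edges:
                continue  # emergency corridor: pretend this road doesn't exist
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor], prev[neighbor] = nd, node
                heapq.heappush(queue, (nd, neighbor))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

# When dispatch broadcasts a fire truck's corridor, the router just re-runs with
# those edges closed; the cars only ever see an ordinary updated route.
city = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
print(shortest_route(city, "A", "D"))                             # ['A', 'B', 'C', 'D']
print(shortest_route(city, "A", "D", closed_edges={("B", "C")}))  # ['A', 'C', 'D']
```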
That central system is already far more real than you would think. I was amazed to download the PulsePoint app and immediately see all the fire/EMS events in real-time, with data on which units are responding/there/leaving. It’s not a huge stretch.
> the most important question here is: does this happen more-often, or for longer-periods, with autonomous cars?
The most important question is - how do you recover. With every autonomous system, when it does fail, it is often catastrophic because there is no recovery path.
When a human clerk makes a mistake, you can talk to them; when a human driver stops in the middle of the street, you can talk to them - when a computer crashes, you are typically fucked.
As far as I can tell, none of the reported incidents in SF involved a "computer crash". Further, none have been as 'catastrophic' as other very-common accidents with human drivers.
So yes, the details of recovery are important – but only in comparison to the human baseline. If incidents happen less often than with humans, and/or recovery happens as fast or faster than with humans, that's the relevant criteria.
Also deserving weight: might autonomous cars avoid dangerous panics in some situations where humans sometimes panic & overreact? I've often seen human drivers make things worse by making an ill-advised impatient navigate-around. I've seen people trying, earnestly, to avoid a block or prior accident who – distracted, anxious – do something else wrong, causing another collision.
Weigh all the rates & severities. Don't set policy based on selective worst-cases, including many imaginary/theoretical contrived cases.
Wouldn't the apples-to-apples comparison be: how many human drivers have been in the path of a honking fire truck, yet refused to move? Including parked cars and other "obstacle" cases is imbalanced.
That's a fine micro-comparison to consider, alone!
But also, what if this is a rare situation where the autonomous car completely fails compared to a human-in-seat? But, even this failure still resolves in under-a-minute. Or, worst-case, approximates the (quite-common) case of a breakdown or human-accident blocking traffic, in a manner that takes tens-of-minutes to resolve.
In that case, who cares that we've found one narrow failure? Humans show impatience, poor-judgement, bad exception handling, recklessness, and so forth at ample rates across many road interactions. It's the net total of blockages by autonomous cars, in count & duration & severity, that matters, and specifically whether overall they do better or worse than human drivers.
That net comparison is the most important consideration.
Why would the only proper comparison be against a human driven car with a human in it rather than against all human driven car situations (including double parking)?
There was a double-parked garbage truck in the opposing lane of traffic. What makes this news is that the other party to the blockade was an autonomous car.
I do think this is a little unfair, but I think it points to a real issue "around" driverless cars (that is not the cars' fault exactly): everyone is trained to deal only with human agents.
It does seem like the car "acted reasonably" (so to speak) - but it still did not act like a person which is what the emergency workers were trained for. The challenge of integrating non-human actors into a previously human-only space is, I think, going to be more difficult and wide-ranging than we imagine. It will probably require a substantial commitment that goes beyond the self-driving car companies.
It's worth saying that a lot of the critique of self-driving cars is that...they were sold on the premise that they would integrate into "existing infrastructure" without big changes and critics have always felt that would not really happen. We've had self driving cars in special environments for...50 years now?
I would firmly say that it did not act reasonably. Blocking the other lane of traffic is the second worst action I can think of, only behind ramming into the emergency vehicle. It would have been better to ignore the emergency vehicle and continue driving, which would have at least cleared the jam.
I worked in the space at one point, and I would put the issue down to autonomous vehicles being trained entirely to keep themselves safe. E.g. they know to stay a certain distance from other cars, but they have no concept of "could another get around me?". A similar example would be parking. On a street without lines to mark spaces, I would not be surprised to see an autonomous vehicle park in the middle of a space large enough for 2 cars, preventing someone from using the rest of the space. Another would be opening space for someone to merge. I don't think that's something they ever do on purpose.
We're teaching them to not hit things or get hit by things, rather than teaching them how to drive, which is a far more nuanced skill. The same thing would have happened with 2 autonomous vehicles, because neither even attempts to understand what the other is trying to do. They would just deadlock until the garbage truck moved.
> It would have been better to ignore the emergency vehicle and continue driving, which would have at least cleared the jam.
With the fire truck using the oncoming lane to overtake the garbage truck, and with the articles saying a human could have reversed back into the intersection to clear the lane, it sounds like the fire truck would be in the way of the autonomous car just driving forwards.
> The same thing would have happened with 2 autonomous vehicles, because neither even attempts to understand what the other is trying to do
I don't think this is true in general - they seem to rely heavily on judging the intent/target of other vehicles to predict future path and react to those possibilities.
Likely that the autonomous car A (in the position of the fire truck) would not overtake the garbage truck when it can see car B oncoming in that lane in the first place, but in the event that it does occur, car B would likely slow to prevent a potential accident (understanding car A is attempting an overtake and may continue forward) and car A would probably pull back in behind the garbage truck (understanding that it'd be in the way of car B continuing forwards).
> I do think this is a little unfair, but I think it points to a real issue "around" driverless cars (that is not the cars' fault exactly): everyone is trained to deal only with human agents.
> It does seem like the car "acted reasonably" (so to speak) - but it still did not act like a person which is what the emergency workers were trained for.
I understand where you're coming from, but I disagree.
There's no reason to expect self-driving cars to play by a looser set of rules than human drivers in the same arena.
If a self-driving car fails to behave properly on the road, that's a fault of the self-driving car.
Blaming everyone else for expecting a car on the road to behave like any other car on the road feels like victim blaming. We shouldn't have to drive around guessing which cars are human-driven or self-driven and change our behavior accordingly. There's no way that's going to work.
Fire truck wants to use oncoming lane to overtake the double-parked garbage truck, autonomous car in that oncoming lane yields to the right to the extent allowed by more parked cars - but doesn't back up into the intersection to completely clear the lane.
> According to Cruise, which collects camera and sensor data from its testing vehicles, the fire truck was able to move forward approximately 25 seconds after it first encountered the autonomous vehicle.
In a similar situation (where having to back up is the only option), I don't think a human driver would have responded much faster. Of course, this was with human intervention also.
Odd that we are focusing on the autonomous car that had no choice but to back up, and not the double-parked garbage truck that induced the need to back up in the first place.
Does your point appreciably change the meaning of the argument? Definitely not; even a 10-second delay is too long for emergencies. There are times where that five-second difference matters, but I feel this falls into a bad-faith argument.
"Bad Faith" is when someone is consciously trying to deceive you. It's not a superlative like 'really' or 'very'.
If the point someone is trying to make is bad, inaccurate, or unconvincing that doesn't mean it's made in bad faith. It's actually less likely to be in bad faith.
While there are people who are indeed nitpicky, unfortunately I do think that it crossed the line. The specific time is not the main message: the point is that extra time hinders emergency response, and "half a minute" can reasonably be read as a rough figure rather than a precise measurement.
An average pilot takes ~4 seconds from noticing an engine failure to responding. I don't know how long getting out of the way would take an average driver in this situation, but I'm guessing that it's at least 15 seconds.
People are nitpicking your nitpick, but the distinction really does matter in this kind of scenario where the argument is that "seconds count".
Double parking is extremely common for a garbage truck in that situation (picking up along a crowded road).
And yes, I absolutely would have checked my rear view mirror, then reversed (slowly).
Or tried to move over as much as possible to the right.
Or depending on the road and my position, U-turned and pulled in front of the garbage truck. Lots of options, blocking the firetruck not being high on the list.
You claim this, but have you ever been in that situation before? Backing up on a road with traffic is extremely dangerous, and I'm not sure I could do it while under the stress of an emergency vehicle situation. There was no room to U-turn (if there was, the fire truck would have been able to get through), and I'm guessing parked cars on the side of the road prevented any kind of "moving further to the right".
Yes, I have backed up for emergency vehicles in the past (and gone through red lights if necessary, pulled all the way off onto the shoulder, crossed the double yellow line... - all things a bot might not have done). I've been driving for 3 decades, most of it in an extremely congested area, so it happens.
There might have been room to U-turn for the automated vehicle. Bit of ascii art here
===DUMPTRUCK==
BOT===FIRETRUCK
The bot, in that situation, can U-turn. Perhaps back up a metre or two first (let's pretend there was another car or two behind, but still a metre or two of room to back up - and if the bot backed up slowly with hazard lights on and honking, a car behind it hopefully would have backed up too - maybe no U-turn necessary). Yet there would be absolutely no way for a (large) fire truck to get through. Point is, it requires a human's full awareness of the world, flexible thinking, and a sense of when it is appropriate to break the rules (the rule of arriving at the destination, the rules of road conduct). These bots still don't have the neural complexity of a crow, much less a human (probably not even that of a goldfish, really). They are OK at the predictable, but that's it.
I have no doubt they'll get there eventually, at which point we might be having another ethical conversation about what an AGI should be allowed to do with its life, but they aren't there yet...
It is a two lane road constricted with packed street parking on both sides of the road. This being SF, even in the best situation where there isn't a garbage truck double parked in the other lane, I'm sure a U turn isn't feasible, or even a 3 point turn around. Backing up was the only option, yet the road is two lane, so more traffic than a residential one lane coming from behind is also a concern.
That's just what garbage trucks do, pick up garbage. Typically, they need to stop in order to achieve that. Thankfully, there are two lanes, so with some negotiation, humans can work around that fact...
Doesn't excuse this at all but I've seen some pretty stupid human driver behaviors with emergency vehicles and blocking intersections in general. A rule that says don't pull into an intersection you will not be able to clear (!!) would solve a ton of issues.
We have that rule in Germany (§ 11(1) StVO). Believe me, it doesn't help. Everyone ignores it and blocks the crossings nonetheless. It doesn't get enforced anyway.
German drivers generally do a good job of getting out of the way of emergency vehicles, but they're also incredibly aggressive drivers (rushing up to red lights, making turns within inches of pedestrians) who also routinely break the rules.
Like I don't think I've ever seen people driving on the sidewalk in an American city, but it's a constant in Berlin. During a recent fair (Neuköllner Maientage) we had tons of people driving into the park and just parking on trails, including right in front of benches. Even coming from another country with an aggressive, entitled car culture, that was breathtaking behavior.
The trend of autonomous cars is one of the things that I believe is not practical. Even the most sophisticated machine learning algorithm requires repetition and constant input to be optimized. But there are infinitely many variables when it comes to driving at street level.
Driving is intuitive on a city street. The scope of autonomous vehicles should be limited to places where there is clarity of transit and very few variables. I would be comfortable with a fleet of autonomous vehicles driving 200 miles/hour in a dedicated lane on a highway, but not in a city or even an empty suburban neighborhood. I think for those intuitive situations the vehicles could even be driven remotely by a real person in a data center.
> Even the most sophisticated machine learning algorithm requires repetition and constant input to be optimized.
Self driving cars are not machine learning algorithms. They have machine learning parts in their programming, but nobody reputable would hook up a neural network to the pedals and the steering and just let it rip.
You simply have a bad mental model of how these machines are built and it shows.
There are three big questions every self driving car has to answer: Where I am? What is around me? What to do next?
To reliably answer the “where” question you don't just trust one sensor. You use multiple sensors. Yes, that includes GPS, but also cameras and a lidar and a radar too. It is not machine learning. If you are interested in how this is done, you need to look up topics like iterative closest point matching, multi-view geometry, Kalman filtering, bag-of-visual-words representation, and Bayesian reasoning.
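For a flavor of the filtering involved, here's a toy one-dimensional Kalman filter fusing wheel odometry with GPS fixes. The noise values and measurements are invented for the example; real localization stacks are far more elaborate.

    # Toy 1-D Kalman filter: predict from odometry, correct with GPS fixes.
    def kalman_step(x, p, odom_delta, gps, q=0.05, r=2.0):
        # Predict: advance the position by the odometry delta, inflate uncertainty.
        x_pred = x + odom_delta
        p_pred = p + q
        # Update: blend in the GPS measurement, weighted by relative uncertainty.
        k = p_pred / (p_pred + r)          # Kalman gain
        x_new = x_pred + k * (gps - x_pred)
        p_new = (1 - k) * p_pred
        return x_new, p_new

    x, p = 0.0, 1.0                        # initial position estimate and variance
    for odom, gps in [(1.0, 1.3), (1.0, 1.8), (1.0, 3.1)]:   # made-up data
        x, p = kalman_step(x, p, odom, gps)
        print(round(x, 2), round(p, 3))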
To reliably answer what is around the car you likewise use all of your sensors. Yes, this has bits of machine learning in it, but it is not only machine learning. You can do, and people do, a lot of model-free perception. For example, if your laser is bouncing back from somewhere, then there is something there. There is a lot of published literature on sensor fusion and tracking and prediction. These algorithms are not magic, and they are not machine learning. They are just what you would also come up with if you thought about the problem hard for 8 hours a day for years on end.
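As a toy illustration of that model-free idea ("the laser bounced back, so something is there"), a lidar return can be dropped straight into an occupancy grid with no learned model at all. Grid size, resolution, and the beam returns below are made up.

    # Minimal model-free occupancy sketch: mark the cell a lidar return hits.
    import math

    GRID, RES = 40, 0.5                    # 40x40 cells, 0.5 m per cell (arbitrary)
    occupied = set()

    def mark_return(x, y, angle_rad, rng):
        # Convert a beam return (sensor pose + bearing + range) to a grid cell.
        hx = x + rng * math.cos(angle_rad)
        hy = y + rng * math.sin(angle_rad)
        occupied.add((int(hx / RES), int(hy / RES)))

    for bearing, rng in [(0.0, 4.2), (0.1, 4.1), (1.57, 7.5)]:   # fake returns
        mark_return(0.0, 0.0, bearing, rng)

    print(occupied)   # "something is there" at these cells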
Then there is the planning. These are not machine-learning-based algorithms. They usually generate a bunch of different plans, then cull the unsafe ones (for example, the ones which would collide with a tracked object), then rate the remaining plans according to some heuristic, choose the best, and repeat. It is not magic. There is serious craft and engineering to it. You can read a good introduction to different approaches in LaValle's Planning Algorithms book if you are so interested.
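A bare-bones sketch of that generate/cull/score loop might look like the following. The candidate plans, collision check, and scoring heuristic are placeholders, not any real planner.

    # Sketch of a generate -> cull unsafe -> score -> pick-best planning step.
    def plan_step(candidates, tracked_objects):
        safe = [p for p in candidates if not collides(p, tracked_objects)]
        if not safe:
            return "stop_and_request_assistance"      # fallback when nothing is safe
        return max(safe, key=score)                   # pick the best-rated plan

    def collides(plan, objects):
        # Placeholder: reject any plan whose path touches a tracked object's cell.
        return any(cell in objects for cell in plan["path"])

    def score(plan):
        # Placeholder heuristic: prefer progress, penalize hard braking.
        return plan["progress"] - 2.0 * plan["braking"]

    candidates = [
        {"name": "keep_lane",   "path": [(1, 0), (2, 0)], "progress": 2.0, "braking": 0.0},
        {"name": "nudge_right", "path": [(1, 1), (2, 1)], "progress": 1.5, "braking": 0.2},
    ]
    # keep_lane collides with the tracked object, so nudge_right wins.
    print(plan_step(candidates, tracked_objects={(2, 0)}))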
"Places where there is clarity of transit and very few variables" sounds like railroads. High-speed rail is, in my opinion, a much more equitable and efficient use of infrastructure than letting private vehicles only owned by those who can afford to buy them fill up a private lane.
> The scope of autonomous vehicles should be limited to places where there is clarity of transit and very few variables.
This is precisely why Tesla Autopilot has such a good "safety record" - it only works on roads where people don't have accidents anyway. It works extremely well on motorways where everyone is travelling the same direction at the same speed and everything behaves extremely predictably. The moment anything crazy happens it hands off control to the human driver.
People compare this "x thousand miles without an accident" behaviour to the sum total of human driving, including twisty country roads and cities, but it's not a fair comparison.
>could even be driven remotely by a real person in a data center.
doesn't this totally defeat the purpose of driverless cars? a remotely operated car is not the same thing as driverless. the company operating the car would still have to pay for a human to do work. also, as cool as remotely operated sounds, there are so many ways that is worse than a human in the car. for starters, peripheral vision is lost in remote.
Most people (including me) think they're better drivers than average. I think I'm probably in the top 10%, at least. Whether or not that's actually true doesn't matter for my argument.
All that considered, I need autonomous vehicles to be MUCH better than average human drivers to consider riding in them. They need to be better than I think I am, so maybe top 5% or so of talent on the road.
1% better than the average human is still an awful driver IMO.
This is an unrealistic take. There are many other differences that you need to consider. For example, what is the nature of that 1% difference in tragedies? Perhaps there's a 1% improvement in overall fatalities but also an increase in very gruesome accidents. Technologies that are rapidly adopted often need to be 10x better than existing solutions, otherwise adoption is a slower uphill battle.
>It just needs to be 1% better than human drivers. If the net result is 1% less tragedies - sign me up.
Say you have a child and want to send him to school. Would you choose:
1. Drive him yourself, knowing you have 10 years of experience with zero accidents, and you are responsible and care for your child.
2. Use a lottery that will give you a random driver: could be a 16-year-old teen with one day of experience, an 80-year-old, a drunk or tired dude, or a super driver.
3. Use an AI that is better than the drunk dude, 1% better than the teen, but worse than you and much worse than a professional driver.
I am curious why you would choose 2 or 3 if 1 is an option.
3
>Not everyone has option 1 available to them. It’s why school buses exist.
- Not everyone has access to school bus options
- not everyone can drive
- not everyone owns a self driving car
....
I would prefer if we stay on topic and answer the hypothetical question.
Or, if you have a bias, let's use a different domain:
Your child is ill and you call for a doctor, but in this universe doctors are frequently drunk or allowed to practice with zero experience. What do you choose?
1. You call your family doctor, whom you trust and who is not a drunk.
2. You call the AI doctor that is terrible compared to a normal doctor but better than the drunk doctors.
3. You realize that it would be much cheaper than creating an army of AI robot doctors to just fix the problem of drunk doctors, and maybe the fact that you are an AI enthusiast won't stop you from thinking that comparing an AI with a drunk doctor is stupid, and that we should demand better stats than "better than the worst."
Chill out. I never said I was an AI enthusiast. Maybe don’t assume things because someone made a small critique about your comment? I was simply responding to your question of why people choose 2 or 3 when 1 is an option. Because it’s not always an option. You even admit that at the top of your comment. Which leads me to ask: what are you trying to prove here?
I had a question, if you don't care about answering then please ignore.
I am attempting to show that 1% better than a terribly calculated average is bad; it could still be great for someone who is a bad driver. My problem is that averages are dragged down by drunk driving and speeding. I am fine if we implement multiple solutions in parallel, like using tech/AI to prevent illegal driving (drunk, tired, texting) while at the same time continuing the research on AI drivers that are actually good at driving.
Better regulations and fewer huge SUVs or trucks with massive blind spots would result in more than a 1% drop. Better public transport would, as well. Sure, it’s not shiny new technologies…
That has downsides. Autonomous driving needs to be 1% net better. Less driving has upsides, but also downsides, and it’s not clear how to do a net comparison there. But yeah, if it is net positive, we should do that too.
That's the engineering argument. The societal expectation, however, is that it's 100% better than human drivers, as in, it must not cause harm, ever. The reality is that the vast majority of folks won't accept autonomous vehicles if they don't meet this high bar.
There are many long-term risks with the rise of self driving cars, but this doesn't seem like one of them. It was particularly novel to the fire department because it was a self driving car, but practically no different than a double parked car that a driver isn't in (which I do see from time-to-time).
Long-term this seems like a very solvable problem given the ability to remotely operate the car.
I think it's going to prove a challenge for these remote operators to connect to a car, with no situational awareness, and then quickly determine the correct course of action remotely.
I wonder if these companies have tested these scenarios, and their remote drivers. Do they do any testing at all? Do they do "check rides" with these remote drivers? I'd really like to know what that side of this entire operation actually looks like.
I've seen with my own eyes an Argo AI car not pulling to the side for a Pittsburgh police car in downtown Pittsburgh on Liberty Ave. The police car had its lights blaring and was forced to go around via the oncoming lane. It was awkward because every other car on the road made way for the officer, but Argo AI did not.
Isn't the dilemma here that these experiments are being done on far too big a scale for the low skill level the cars currently exhibit? And yet, if you take that to the companies behind them, they would all argue they need more cars to gain more ML experience in order to have a better outcome.
The driving capabilities are impressive from a tech perspective, but they're pretty poor in comparison with average or even fairly poor drivers. Given these limited capabilities, the cars are obviously still in an early R&D stage, and thus the companies running them ought to be giving far higher priority to reducing their outward impact. How do they do that? Staff the human "take over" driver pools at far higher levels, so the takeover is always smooth and rapid. Right now it's like those useless chat boxes on websites that ask you to chat, and then you find they've got no human to engage with you, because they multiplexed excessively.
Bringing the number of humans down once the systems are good enough is fine and that's how they'll get their commercial rewards but doing it now is premature.
I feel like part of the difficulty here is that if one Cruise car responds this way, it means that every Cruise car responds this way. So whereas you might have a few dumb human drivers who can't handle this situation and occasionally run into issues, if this particular Cruise car were mass deployed, you'd run into it frequently.
On the other hand, it's possible that while the vehicle couldn't act correctly in this particular circumstance, it would act correctly in many similar circumstances, and thus the issue would still be rare.
My first reaction to this was horror on seeing the title.
But... I rode in the front seat of an ambulance transporting a family member in the back. Several cars drove in front of the ambulance in intersections, completely ignoring the siren. As we drove, a several-lane road feeding into the intersection was blocked by cars except for one lane, which was too small for the ambulance to get through; only when the ambulance had stopped did the cars bother to make room. They are supposed to clear out well in advance. One even stopped in front, and the ambulance drove around it. I commented on it, and the driver said that is how it is. This was late at night, in some traffic, in southern California suburbia. I know cars make room in the daylight. There were traffic cameras, likely police too; no one cared to enforce the law.
Not a complete blockage like that firetruck dealt with for 25 seconds on a small road, but if the software is made to respond to ER vehicles, would it perform better than humans 99% of the time?
I lost a bit of my faith in humanity seeing that one night - they could have easily made room. If it was during the day I think they would have acted better, while I believe a properly licensed and mature autonomous vehicle would have made room long before that, day or night, in most situations - and a human can take over in the rest, as happened in this article.
Interesting. Emergency vehicles already interact with automation and have done so for a very long time. For example they have strobes that communicate with signal lights to make them turn green for the fire truck. It seems more like the ball was dropped here as the same automation could have been integrated into the autonomous vehicles to make them get out of the way. Not just the strobes but detecting emergency vehicle lights, sirens, etc.
From what I can tell, it detected the emergency vehicle and yielded to the right. Just that it didn't reverse back into the intersection to clear the lane completely.
The article (which I bother to read before commenting) indicated that the only way to clear the way would have been to drive in reverse. I don’t know if a human driver could have been expected to do any better. The real problem was the double parked sanitation vehicle.
I’m not a fan of tech companies doing their beta testing on public streets, but this doesn’t seem like a egregious incident despite the delay it caused.
> I don’t know if a human driver could have been expected to do any better.
The problem is you can't communicate with an autonomous car. Even if a human driver doesn't know what to do, you can signal to them to move in the right direction.
Research shows that ambulance lights and sirens minimally impact response times. Perhaps the fire truck would have arrived faster if it had yielded to oncoming traffic, or if the garbage truck had itself been autonomous.
The most important point is NOT a single car doing something, but what a flock of such cars can do once commanded remotely. We have already seen theoretical attacks against cities bricking a few cars in strategic spots and trapping 99% of the traffic, attacks against the electricity grid with spike loads or spike drops, individuals pushed to their deaths in apparent accidents or pushed toward others in apparent terrorist actions, etc.
So the real point is: do we really want to allow such complex tools to be privately made and designed on proprietary hardware, instead of imposing publicly developed FLOSS code on open hardware built from public research?
Artists have envisaged tragic scenarios from Terminator to The Matrix, passing through Gattaca and Minority Report; techies have read much more substantial and realistic nightmares. Civil society's answer? Not seen so far...
1. When autonomous cars are numerous enough to pose a statistically significant issue for first responders, the lives they will save by preventing traffic accidents will absolutely outnumber the possible damage from an odd edge case like this one.
2. If you're still worried about it, perhaps the solution is to require manufacturers to build some kind of override for autonomous vehicles, then give it to first responders like fire, medics, and police.
Why would it stop in the travel lane? In this scenario clearly the best choice is to go into a driveway if there is one nearby, next best choice is to back up, and if neither of those are possible just move forward.
I see no justification for stopping in the travel lane. Did it think it was being pulled over?
Looking at diagrams of the situation, "not stopping at all" means that the fire appliance would have had to stop, allow the autonomous vehicle to get out of the way, and then set off again. Not ideal, for sure, but "less worse" than just being blocked.
> a San Francisco Fire Department truck responding to a fire tried to pass a doubled-parked garbage truck by using the opposing lane. a traveling autonomous vehicle (...) was blocking its path. While a human might have reversed to clear the lane, the Cruise car stayed put (......) Tiffany Testo, a spokesperson for Cruise, confirmed the incident. She said the driverless car had correctly yielded to the oncoming fire truck in the opposing lane and contacted the company’s remote assistance workers,
So it sounds like the car was driving perfectly fine, but stopped when a car was coming at it head on and it didn't reverse.
I think the real issue here is a double parked garbage truck forcing the fire truck to go into oncoming traffic in the first place. Maybe there's a need for a loading only zone there so that the garbage truck doesn't have to block the travel lane.
> I think the real issue here is a double parked garbage truck forcing the fire truck to go into oncoming traffic in the first place. Maybe there's a need for a loading only zone there so that the garbage truck doesn't have to block the travel lane.
I disagree. Driverless vehicles NEED to be able to handle these unexpected situations in an appropriate manner. That they don't currently is in fact a "real issue" and in this case at least is evidence this tech may not be ready for primetime.
> Driverless vehicles NEED to be able to handle these unexpected situations in an appropriate manner. That they don't currently is in fact a "real issue" and in this case at least is evidence this tech may not be ready for primetime.
Personal opinion here, but I think defaulting to "stop and contact support staff" is as good of a "fix" as we will have for a long time. According to Cruise (so potentially biased) it took 25s for the firetruck to be able to move after the scenario was encountered, though the article says the garbage truck moved.
I mean, not backing up into cross traffic at the intersection is also a good thing. I’ve seen accidents where cars got out of the way of an emergency vehicle only to crash into another.
Sure, but did this car have traffic or an intersection behind it? If so it's a non-story. Given the double parked garbage truck I think it's safe to say they weren't near an intersection, so cross traffic is unlikely. The car should have been able to look behind it, see the clear path, and move out of the way. Heck the whole supposed advantage of self-driving is it can watch in all directions at once.
This outcome is acceptable to me, though I doubt the assertion. I don't think we are good enough at AI yet to put cars on the road with any amount of data. When we are I think we'll be capable of training them without having to put vehicles with dangerous behaviors on the road to gather data.
The real issue is that not every situation can be perfectly modeled, real human systems aren't (and can't be!) perfect, so sometimes you need to think outside of the box and with human priorities.
It stopped because it recognized that it needed to stop to let an emergency vehicle get past it. Unfortunately, it stopped right next to a double-parked garbage truck in the next lane. No one had programmed in a set of contingencies for "stopped to let something pass, no room for it to pass". This is one of who knows how many contingencies it doesn't have a solution for, which is why, in my opinion, this kind of thing will be quite common for a long time, until enough events like these are found and the software is changed to handle them. I'm hoping that none of these kinds of unanticipated events will cause fatalities, but I'm skeptical that will be the case.
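To spell out what "a set of contingencies" could mean in code, here is a guess at the kind of fallback logic that seems to be missing. This is purely illustrative - the states, thresholds, and action names are invented, and it is not how Cruise's stack actually works.

    # Hypothetical contingency for "yielded, but the emergency vehicle still can't pass".
    def yield_contingency(blocked_seconds, emergency_vehicle_can_pass,
                          reverse_path_clear):
        if emergency_vehicle_can_pass:
            return "hold_position"                    # yielding in place was enough
        if reverse_path_clear:
            return "reverse_slowly_with_hazards"      # clear the lane if it is safe to back up
        if blocked_seconds > 10:
            return "escalate_to_remote_assistance"    # hand the decision to a human operator
        return "hold_position"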
> I think the real issue here is a double parked garbage truck forcing the fire truck to go into oncoming traffic in the first place. Maybe there's a need for a loading only zone there so that the garbage truck doesn't have to block the travel lane.
If there’s a discussion to be had here, it’s not through the frame of the needs of driverless cars. Self-driving has to work in the driving environment that exists today to be viable, and double parked garbage trucks are part of the regular order of that driving environment.
When the emergency vehicle is delayed, what percentage of the delay is Cruise’s fault and what percentage is the garbage truck’s? I assign less than 50% to Cruise.
But look at it this way: one was a vehicle with no driver present in the seat, and the other was a vehicle in which a driving agent is never not present. A human driver in that situation should not spend longer than 5 seconds figuring out and implementing a way to safely let the fire truck pass. The Cruise car took longer.
If a human were present, with two working eyes and ears and armed with good judgement, they might never have been blocking the way in the first place: a fire truck in SF blares its sirens while it charges to the scene it was called to, so it is entirely possible that, seeing the garbage truck parked up ahead, they would have opted not to continue driving at all, because that looks like an obvious choke point for an emergency vehicle trying to pass. Alas, I cannot reasonably expect that kind of situational awareness and judgement from an average human driver, so maybe that's also too big an ask of Cruise.
Also note this from the article:
> The fire truck only passed the blockage when the garbage truck driver ran from their work to move their vehicle.
So in the end, the Cruise vehicle was not the one which allowed the fire engine to pass but a hurried garbageman moving his truck after getting back in the driver’s seat. 50/50 may actually be too generous to Cruise here.
> A human driver in that situation should not spend longer than 5 seconds figuring out and implementing a way to safely let the fire truck pass. The Cruise car took longer.
Impossible to say for certain, but I'd bet that a significant number of human drivers would yield to the right (as the autonomous car did), or take longer than 5 seconds to reverse back into the intersection (as was desired of the autonomous car).
If there are no vehicles or people directly behind the car, no traffic close enough to pose a danger, and a quick spot check confirms this, why would it take you longer than 5 seconds to reverse out of the way? You don't need to back up into Canada or something; a short distance is enough to let another, necessarily impatient, vehicle pass. In the end the garbage truck moved, which is why the fire truck was “able to pass within 25 seconds of encountering the Cruise car”.
5 seconds is a very short amount of time. Humans are pretty bad at quickly solving for and safely executing an unusual response to a scenario they’ve never encountered.
I would hope that after coming up with the strategy to reverse that you’d take most or all of 5 seconds to confirm there were no obstacles behind you. It may take you 2 seconds to realize the fire truck is stuck unless you move, 2 seconds to conclude you can only help by reversing, 3 seconds to check your mirrors, 0.5 seconds to shift to reverse, a couple seconds to look at the backup camera to check the spot not visible to the mirrors, 2 seconds to move the car 40 feet, longer if you have to check for cross traffic in the intersection.
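Summing those rough estimates: 2 + 2 + 3 + 0.5 + 2 + 2 ≈ 11.5 seconds before any allowance for cross traffic at the intersection, already more than double the proposed 5-second budget.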
Aviation research points to startle having a negative effect on decision-making quality and speed for up to 30 seconds. That's for trained, generally fit aircrews.
Quibble with the times, but I doubt you get to a “most people would complete the task safely in under 5000ms” conclusion.
> If there’s no vehicles or people directly behind the car, no traffic close enough to pose a danger, and a quick spot check confirms this
No traffic close enough to pose a danger isn't a given in this scenario - there may well have been traffic.
> why would it take you longer than 5 seconds to reverse out of the way?
I think many would be at least slightly hesitant to back into an intersection. There would be initial reaction/braking time, time to check for and evaluate options, then time to reverse backwards with caution.
> In the end the garbage truck moved, which is why the fire truck was “able to pass within 25 seconds of encountering the Cruise car”.
The autonomous car did yield to the right to the extent it could, so wasn't doing nothing for 25 seconds. Just that (I'll go with the first responders' judgement) doing so didn't give sufficient space.
Pull out a stopwatch and let it go up to 5 seconds.
Did that feel like too short an amount of time to watch your mirrors, and throw your car into reverse while making sure the hazard lights are on? The possible dangers at 4am are other cars (and you probably already knew if someone was driving behind you but you check anyway in case someone turned into your lane), people (who probably also heard the sirens and need to be yielding themselves) and animals (who are probably and hopefully not around anymore because fire engine sirens are extremely loud and scary). So you back up while watching your mirrors the entire time. By second 5, even if you go slowly, there should be space enough between you and the garbage truck for a fire engine to negotiate its way around you, and if there are other vehicles around but not also trying to get out the way, then congrats, you attempted to do your civic duty but the other guy failed. You’re absolved of your sins.
If that’s what happened, SFFD wouldn’t have complained to the City about this exact situation, in which they described the three vehicles present: theirs, Sunset Scavenger’s, and Cruise’s. Yielding while still in the way is not the correct maneuver when it doesn’t allow the fire engine to pass. Doing the maneuver that gets them through in the shortest amount of time without endangering anyone else is - and it isn’t a complicated one, just un-fun, and you shouldn’t be out-paced by someone who is literally outside their vehicle at the start of the situation when you are a piece of software that can never leave.
Have a passenger film you driving a trip across the city. At some random time while you’re stopped, have them unexpectedly scream at you to back up 40 feet right now. From the moment they do that, check behind you, turn your hazards on, shift to reverse, and safely back up 40 feet.
Check the video and I bet you’re well past 5 seconds when you complete the maneuver.
Seems like the headline should be "A Double-Parked Garbage Truck in SF Blocked a Fire Truck Responding to an Emergency." But I'm guessing that would get approximately 0 clicks.
Sounds more like nobody put this particular user story onto the backlog and marked it as required before going to test rather than an edge case a computer can't handle. The city should probably demand compensation.
It’s funny reading “autonomous car” and “self-driving car”. We can be certain that will eventually sound as silly as the old name for cars “horseless carriage”.
I don’t know. “Car” is short for carriage. Arguably, horse-drawn carriages had certain levels of autonomy. An “automobile” focuses on the automated, or mechanical, nature of the conveyance. These days, “auto” is usually focused on mechanical thought. I imagine that we might just collectively redefine one of our existing terms once it is not so ambiguous anymore.