
>Cruise and Waymo say city officials have mischaracterized their safety track records. Their driverless taxis, the companies say, have lower collision rates than human drivers and public transit.

This is comparing the mean driverless incident rate to the mean human driver incident rate. This is disingenuous, since a small minority of human drivers cause the vast majority of incidents, thereby severely inflating the mean human incident rate.

Today's self-driving cars may be safer than the mean human driver, but I would wager they are far from the median human driver, and absolutely nowhere close to the top 10% of human drivers. It may be possible (but very difficult) to beat the median human with dramatic improvements to current self-driving algorithms, but beating the top 10% of human drivers will require AGI.
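The mean-vs-median gap being described is easy to see with toy numbers (all rates here are invented purely for illustration):

```python
import statistics

# Hypothetical per-driver incident rates (incidents per million miles).
# 90% of drivers are "good" (rate 1), 10% are "bad" (rate 50).
rates = [1.0] * 90 + [50.0] * 10

print(statistics.mean(rates))    # 5.9 -- pulled way up by the bad 10%
print(statistics.median(rates))  # 1.0 -- the typical driver

# A robotaxi at, say, 3 incidents per million miles "beats the average
# human" (5.9) while still being 3x worse than the median driver (1.0).
```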



>This is comparing the mean driverless incident rate to the mean human driver incident rate. This is disingenuous, since a small minority of human drivers cause the vast majority of incidents, thereby severely inflating the mean human incident rate.

Why is this disingenuous? Unless you're also advocating for banning those top 10% of drivers from driving, you can't just cherry-pick only the good drivers.


Banning those drivers does make sense, though. Repeat offenders (speeding, dangerous driving, etc) should just have their license revoked, right?


> unless you're also advocating for banning those top 10% of drivers from driving

Yes, maybe they should at least be suspended from driving for some period of time. The problem is that won't happen, because we don't catch them all.


Mean is the relevant metric, though. If it's only the bottom 10% of human drivers who cause all crashes and driverless cars are as safe as the 20th-percentile human driver, you can replace all human drivers with autonomous ones and eliminate all crashes.

That's to say nothing of the fact that driverless cars will improve from their current level, and the fact that companies operating them can be held liable for crashes and forced to account for their actions in a way that the worst human drivers cannot (which provides a strong incentive for them to improve).
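To make that concrete with a toy fleet calculation (all rates invented for illustration): a robotaxi that is worse than 90% of individual drivers can still slash total crashes, because the fleet total is dominated by the worst 10%.

```python
# Toy fleet of 1,000 drivers: 90% have a low crash rate, 10% a high one.
human_rates = [0.001] * 900 + [0.5] * 100   # crashes per driver per year
robotaxi_rate = 0.01  # worse than 90% of drivers, far better than the mean

human_total = sum(human_rates)                  # ~50.9 expected crashes/yr
mean_human = human_total / len(human_rates)     # ~0.051
robot_total = robotaxi_rate * len(human_rates)  # ~10 expected crashes/yr

# Replacing every driver cuts fleet-wide crashes by roughly 80%, even
# though the robotaxi is 10x worse than the drivers who rarely crash.
print(human_total, mean_human, robot_total)
```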


you can more easily prevent the bottom 10% from driving by more rigorous driving standards / more stringent revocation of licenses.


> According to the NHTSA, 19% of motor vehicle fatalities involved drivers with invalid licenses. Furthermore, drivers with invalid licenses comprise 13% of all drivers in fatal crashes.

https://www.carinsurance.com/Articles/driving-without-licens...

I'm all for more rigorous driving standards. However, the USA is car-centric and people need jobs.

To really do what you're suggesting, the USA would need to regulate car companies to validate driving licenses while the car is in use. AND THEN, the USA would need to figure out what to do with all of these upset, newly unemployed, and aggressive people.


It's much worse in SF. When the SFPD runs (ran, really, because they no longer do traffic ops at all) crosswalk stings, the majority of drivers who violate the crossing pedestrian's right of way are unlicensed. Losing your license in California has virtually no influence on whether you continue to drive.


Yes, car-centricity is a big problem, I agree (as someone who doesn't own a car or regularly drive...).

Are robo-cars going to prevent people from driving with invalid licenses?


> Are robo-cars going to prevent people from driving with invalid licenses

Yes. Because they wouldn't be driving.


unless human-operated vehicles become banned or comparatively inordinately expensive, I'm not sure how that would happen? At least any time soon...


Vehicles in general are inordinately expensive.

Lyft just quoted me: $20 for 3.5 miles, in 15mins. That is a reasonable daily commute. 2 * $20 = $40 per day round trip. $40 * 5 = $200 per week. $200 * 50 = $10,000 per year.

For a 45 min commute: that is $30k per year.

Clearly should buy a vehicle.

However, with 10x cheaper Lyfts at $1-3k per year, a year of rides costs about what maintenance, registration, and insurance alone cost for an owned car. Add car payments on top of that, and owning a car would be inordinately expensive.

This doesn't even include the benefit of getting 45-90 mins back to learn or play while commuting.
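The fare arithmetic above, as a quick sketch (the $20 quote and the 10x-cheaper scenario are the comment's own figures):

```python
# Quoted fare: $20 for a 3.5-mile, 15-minute ride.
fare_15min = 20
yearly_15min = fare_15min * 2 * 5 * 50   # round trip, 5 days/wk, 50 wks/yr
print(yearly_15min)   # 10000

# Scaling the same fare to a 45-minute commute (~3x the ride):
fare_45min = 3 * fare_15min
yearly_45min = fare_45min * 2 * 5 * 50
print(yearly_45min)   # 30000

# With the hypothetical 10x cheaper robotaxi fares:
print(yearly_15min // 10, yearly_45min // 10)   # 1000 3000
```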


I don't think Lyfts get 10x cheaper with robocars. It's not like 90% of your fare goes to driver net profit. Maybe a factor of 2, but I'm not even sure that will be true (robocars will generally be more expensive, and require additional remote monitoring).

https://irle.berkeley.edu/files/2020/07/Parrott-Reich-Seattl... suggests that net driver pay is less than half of gross pay, which doesn't account for the portion that Lyft takes. Taking that at face value, maybe it gets 40% cheaper? Driving is expensive!

There are some economies of scale in maintenance/procurement (but at the same time, you need all-new, likely more expensive vehicles than the mean Lyft driver), and for infrequent users who pay for parking then that can make a big difference, but unless the robocar operator makes no profit, it's hard to imagine robocars being significantly cheaper than owning a car if you are using it regularly.
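A back-of-envelope version of that ~40% estimate, taking the assumed 25% platform take and the report's "net pay is less than half of gross" at face value (all fractions are assumptions from the thread, not anyone's actual books):

```python
fare = 1.00
platform_take = 0.25 * fare          # assumed Lyft commission
driver_gross = fare - platform_take  # 0.75 of the fare
driver_net = 0.5 * driver_gross      # report: net pay is ~half of gross

# If automation removes only the driver's net pay (vehicle costs remain):
robotaxi_fare = fare - driver_net
print(f"discount: {1 - robotaxi_fare:.1%}")  # discount: 37.5%
```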


Couldn’t we just take taxi prices in the developing world as a proxy for Robo-taxi costs? The cars are more expensive (higher taxes), gas is more costly, but driver salaries and maintenance costs are cheaper. Then say an 80-100 RMB one way commute cost sounds more reasonable for Robo taxis (so let’s say 200 RMB/day = $28). These costs are conservative, in 2016 I was only spending 100 RMB a day on taxis in Beijing, which would have been my parking costs if I drove instead.


>https://irle.berkeley.edu/files/2020/07/Parrott-Reich-Seattl... suggests that net driver pay is less than half of gross pay, which doesn't account for the portion that Lyft takes. Taking that at face value, maybe it gets 40% cheaper? Driving is expensive!

I'm not sure what expenses go into calculating "net driver pay", but based on a quick skim there are some questionable items in there. "Exhibit 28 Total Seattle TNC driver expenses" lists stuff like "health insurance costs" and "independent contractor taxes", which obviously wouldn't be needed for robotaxis. If we drill down into "vehicle operating costs", there are also some questionable items, like a $1560/year expense for "cellphone". I agree that robotaxis being 90% cheaper is unlikely, but 66%-75% cheaper (ie. a quarter to a third of the price) seems to be within the realm of possibility.


certainly whatever telemetry system is used in robocars will be more expensive than "cell phone," but yes would save on tax and health insurance.


> I don't think Lyfts get 10x cheaper with robocars. It's not like 90% of your fare goes to driver net profit.

Ok let’s do this one more time:

Lyft currently takes 25% of the fare. The driver and the car take the other 75%.

Lyft’s 25% covers R&D for building the platform, support costs, and solving the chicken-and-egg problem in new launch regions, i.e. Lyft operates at a loss in a new city to attract drivers so riders can start using the app, a loss roughly made up for by profitable cities like LA. With robotaxis:

1. There is no chicken-and-egg problem; it’s just a matter of capital cost and ROI time horizons.

2. The robotaxi will be more consistent and has reduced support costs.

3. The platform will have been built, requiring only limited R&D to keep the lights on (KTLO).

4. Competition between large robotaxi fleets will push down margins, reducing overall profits.

It’s very easy to see Lyft only requiring 5% rather than 25% to continue being profitable and competitive.

Meanwhile, regarding the driver and car: robotaxis require roughly $10-20k more in manufacturing cost, but can also be used 24/7, so there is better capital utilization on the initial investment (it really is hard to imagine the worth of a car that is used 99% of the time rather than parked 95% of the time). Maintenance costs fall with scale (plus in-house mechanics to improve margins), with electric drivetrains, and with better initial manufacturing (optimizing overall car cost rather than the initial sale price to individuals).

Increased scale can mean reduced margins, i.e. Amazon. Vertical integration also yields better efficiency: once robotaxis have been real for a decade, car manufacturers will build and run robotaxi fleets. Unlike today, vertical integration of manufacturing, maintenance, recycling, and fleet operation creates a lot of consistent demand and can bring significant economic advantages.

Even the car manufacturers’ goals change. Rather than optimizing to sell cars (have them degrade and sell more), cars will be optimized for durability, maximum materials reuse, lowest variable cost per ride, and ergonomic rides.

A professional driver can put on 25k-50k miles per year [1], compared to 10k-15k miles per year on average [2], meaning professional drivers hit a car’s mileage limit of 200k-300k in 6-8 years. Already it would be better for them to have cars optimized for cleaning, maintainability, and longer lifespans. And that doesn’t even touch on tire lifespans or the expedited cost of maintenance. Taking the math further: those figures assume a 40-hour work week for professional drivers. If we target 75% (a lower bound) of the 168 total hours in a week, robotaxis will drive 3-4x the miles of today’s Uber drivers, setting lifespans of 2-4 years per taxi.

Large robotaxi fleet operators will likely become manufacturers, but regardless they will change the mental model of how car manufacturing currently operates. Robotaxis will not only absorb the profits of the taxi industry but also the profits of the car manufacturing, car maintenance, car rental, last-mile delivery (including food delivery), rental housing, and other industries.

Fleet operators will also optimize the cars for maximum recyclability. Just look at the trend with a manufacturing company like Apple: metal frames not only look good but also recycle better and are better for business as a whole. Apple has an iPhone trade-in program because recycling materials can be cost-effective for them. The scale will eventually be unimaginable, allowing the margins to be astonishingly low.

[1] https://www.quora.com/How-many-miles-does-a-full-time-driver...

[2] https://www.carinsurance.com/Articles/average-miles-driven-p...
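The lifespan arithmetic in that comment can be checked directly (all figures are the comment's own estimates, not measured data):

```python
pro_low, pro_high = 25_000, 50_000      # pro-driver miles/year (~40 hr/wk)
life_low, life_high = 200_000, 300_000  # assumed car mileage limit

print(life_high / pro_high)   # 6.0 years for a pro driver to wear out a car
print(life_low / pro_low)     # 8.0 years

# Robotaxi running 75% of the 168 hours in a week vs. a 40-hour driver:
utilization = 0.75 * 168 / 40
print(round(utilization, 2))  # 3.15 -- roughly the claimed "3-4x the miles"

# So the 200k-300k mileage limit arrives in a couple of years:
print(round(6 / utilization, 1), round(8 / utilization, 1))  # 1.9 2.5
```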


Utilization levels will be driven by peak demand, as usage patterns are highly nonuniform, so I don't think they'll be that different. Also remember that even during peak hours, cars will be traveling between fares, potentially half the time (since demand is often mismatched during peak hours). Because most of the costs scale per mile, I don't think the immense cost savings will materialize.

Now robobuses are an idea I can get behind, since labor costs do dominate there.


> Utilization levels will be driven by peak demand

Exactly! Just like a data center, the economics are going to work out as high scale with medium returns, and over time: minimal returns.


If we had 10x cheaper Lyfts (and that's a big if), they would likely cause more fatalities than humans even if somewhat safer per mile, on account of many more private-vehicle urban miles being driven.


About a month ago a human driver ran into me while I was riding a bike and sped away (luckily I was unharmed, aside from a scrape or two). How exactly would stricter driving tests or revocation of licenses have prevented that?

If a Waymo had done that, on the other hand, it'd have stopped and I'd be getting a nice big check from Google.


traffic enforcement cameras (e.g. automatic tickets for speeding, running red lights, hitting cyclists etc.) would probably help, but for some reason motorists are against that.

I guess the nice thing about robocars is that they can be effectively regulated not to drive at unsafe speeds (though probably people would be mad about that too...).


Red light cameras don't seem to increase safety: https://www.scientificamerican.com/article/red-light-cameras...

Motorists hate them because they're a money grab from local municipalities.


That article fails to mention that Houston also has an issue with very short yellow timers.

https://www.thenewspaper.com/news/22/2232.asp

Let's cut the bullshit. Red light cameras aren't there to increase safety, they're there to increase revenue to the city.


even if true, I'm perfectly fine extracting revenue from reckless drivers.


When they were common here in Phoenix, most people would just throw the ticket directly in the garbage. Law enforcement knew they were pretty unenforceable but if they got a small percentage of suckers I guess it was worth it.


Rear-end crashes are much more survivable than T-bones, though, and this study doesn't consider pedestrian injuries. To me it sounds like the cameras should also be fining people who stop too abruptly, as it implies they were going at an unsafe speed for the conditions.

Anyway, relatively few people run red lights. Speed cameras would likely have a much higher safety return.


Unless you move them around drivers will just slow down at the camera and then commence speeding. What works is strategically placed cameras that measure average speed over a distance. We have these a few places in Norway and most people stick to the speed limit or a bit below between the cameras.
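Section control like this is simple to compute: two cameras a known distance apart timestamp each plate, and the average speed over the section is just distance over elapsed time. A minimal sketch:

```python
def average_speed_kmh(distance_km: float, t_enter_s: float, t_exit_s: float) -> float:
    """Average speed over the section between two timestamped cameras."""
    return distance_km / ((t_exit_s - t_enter_s) / 3600)

# A car covering a 5 km section in 3 minutes averaged 100 km/h --
# ticketable on an 80 km/h road no matter how much it slowed at each camera.
print(average_speed_kmh(5, 0, 180))  # 100.0
```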


Reading your first sentence, I immediately thought "this is easy to deal with using checkpointed cameras." And then I read the next part. Curious how well that works out, all told.


it would also be relatively straightforward to require cars to self-report speeding (keep a speed/position log, have it be read out at annual inspection). There are some issues around tunnels and poor GPS geolocation in urban areas, but it would work great for things like highways...
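A sketch of what such a read-out could look like (entirely hypothetical scheme; the function name and log format are invented): flag segments between consecutive fixes whose average speed exceeds the posted limit.

```python
def speeding_events(log, tolerance_kmh=5):
    """log: list of (t_seconds, odometer_km, limit_kmh) fixes.
    Returns (t_start, t_end, avg_kmh, limit) for each violating segment."""
    events = []
    for (t0, d0, limit), (t1, d1, _) in zip(log, log[1:]):
        avg_kmh = (d1 - d0) / ((t1 - t0) / 3600)
        if avg_kmh > limit + tolerance_kmh:
            events.append((t0, t1, round(avg_kmh, 1), limit))
    return events

# 2 km in the first minute (120 km/h in a 100 zone), then 90 km/h:
log = [(0, 0.0, 100), (60, 2.0, 100), (120, 3.5, 100)]
print(speeding_events(log))  # [(0, 60, 120.0, 100)]
```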


I had that exact thought as I was typing my post! :D

That said, I can see many reasons that is not liked. Amusingly, as you add more and more detection systems to cars for stuff like this, you are backing into autonomous vehicles.


Detecting out of spec driving is a much simpler (and cheaper to solve) problem than autonomous driving, and may have higher safety benefits? Yes there are privacy concerns but the same concerns exist with autonomous driving.


After creating a society where car ownership is a necessary prerequisite to success, it’s challenging to block people from participating.


This is literally within the boundaries of the city of San Francisco, where having a car is only a prerequisite for success if your job is “driving”.

If you’re going to lament the car-focused society we have created, I understand and sympathize emotionally — but you should really take care to relent in those places that are most like the world you would prefer.


I think a key disconnect is that the 'incidents' being complained of most are not collisions, but they can still be disruptive, and indeed dangerous.

Suppose one were to just install brightly painted immobile bollards on streets, and insist they were "driving" just very very slowly. They wouldn't hit anyone. They wouldn't kill anyone. They would piss everyone off.

This disconnect is repeatedly part of where this conversation gets tripped up.

> “Cruise’s safety record is publicly reported and includes having driven millions of miles in an extremely complex urban environment with zero life-threatening injuries or fatalities,” Cruise spokesperson Hannah Lindow told The Chronicle.

> The city’s transportation agencies documented several incidents where driverless cars disrupted Muni service. During the night of Sept. 23, five Cruise cars blocked traffic lanes on Mission Street in Bernal Heights, stalling a Muni bus for 45 minutes. On at least three different occasions, Cruise cars stopped on Muni light-rail tracks, halting service.

https://www.sfchronicle.com/projects/2023/self-driving-cars/

Ok great, they didn't kill or injure people which is nice, but they were _disruptive_ in a way which a human driver would not have been. And critically, we can't _have_ a fact-based conversation around those non-collision incidents because these companies aren't even required to report them:

> But officials said it’s been difficult to assess their effectiveness because companies aren’t required to report unplanned stop incidents — some of which have been captured on social media — when they happen.

> San Francisco officials want the state to require that companies report incidents when they happen.

I've certainly seen cases where self-driving cars behaved in a way that I would consider reckless if a human was at the wheel, but were not collisions, did not result in injury, and I expect will not form part of any statistics reported to any government body ... but I think that's too low a bar.


> I've certainly seen cases where self-driving cars behaved in a way that I would consider reckless if a human was at the wheel, but were not collisions, did not result in injury, and I expect will not form part of any statistics reported to any government body

A few weeks ago I saw one blow a red light. Nobody hurt or killed, so it wasn't reported to any responsible agency. It might not have even been tallied by the company, if the car thought there wasn't a problem. But its action was clearly unsafe.

It's hard to get a grip on the problem when the data is so faulty.


Someone posted an example of a self-driving Cruise car that appeared to run a red light https://www.reddit.com/r/sanfrancisco/comments/14wyyzw/just_.... Although in that example, technically the self-driving car was guilty of entering the intersection without sufficient space on the other side, rather than of crossing the limit line while facing a red light.


> beating the top 10% of human drivers will require AGI.

even if it is true, this doesn't matter at all.

if they can beat the median driver and you replaced all drivers with driverless vehicles, they would eliminate whatever bottom percentile driver is responsible for most of the accidents and the world would be vastly safer.

by your own assertion, the 10% best drivers don't matter because they already don't cause accidents.


They also conveniently ignore everything except collisions. For example number of traffic jams caused because your car got confused and decided to stop in the middle of the street.


I care about the overall mean, though. Do you know what we call the bottom 10% of drivers? Drivers! So long as overall they are better, then get humans off the road; sure, the top 10% are worse off, but overall I'm better off, as I will never have to deal with that bottom 10%.


Why does it require AGI? That's a bold, unsubstantiated claim. Let me make my own bold claim: Simple common sense reasoning is very close to being solved with our latest LLMs. You would need AGI if you wanted to replace the team developing your self-driving car. A decent LLM, with vision input that's been fine-tuned on driving scenarios, will be more than sufficient for Level 5 driving.


You need AGI for good self-driving because driving requires predicting the actions of other human drivers at an extremely high level. This is second-nature for humans so we barely notice that we are doing it, but it is extraordinarily difficult for non-humans.


I think the reality is somewhere in the middle. You need to be able to accurately predict behavior of humans _following some conventions_, and to be wary of the behavior of humans when they violate those conventions.

An example I saw:

- At the start of a construction area, a guy wearing a hi-viz vest holds a stop sign. A self-driving car stops at the sign.

- The guy _lowers_ the sign a bit while looking over his shoulder down the street towards others on his crew.

At this point a _human_ guesses that the sign is lowered only b/c the guy has seen that the car stopped, and expects the car to stay stopped until some further signal (e.g. a waving gesture, or flipping the sign to show the "slow" side). The human driver understands that stop sign guy is looking to coordinate with someone else nearby. There's a "script" for this kind of interaction.

... but the self-driving car starts moving as soon as the road crew guy lowers the sign. In this case nothing seriously bad happened. But it was not following The Conventions.

This doesn't take full general intelligence perhaps -- but it takes some greater reasoning about what people are doing than the cars seem to have currently, and so sometimes they drive into a zone that the fire department is actively using to fight a fire, and get in the way.


No you don’t. I can feed images of crazy driving scenarios into LLaVA and get reasonable responses. That’s a general purpose LLM with $500 worth of fine tuning running locally on my PC. You should look into what can be done with the current state of the art LLMs. Your intuition for what’s possible is out of date.

If I can do that with open source LLaMA variants, I can only imagine what’s possible if you have an actual annotated dataset of driving scenarios. Imagine a LLaMA model thats been fine tuned for lane selection, AEB, etc.


That's a nice conjecture, we will see in the coming years if it plays out.


You getting six nines of accuracy on that with good latency? Did you watch the "how our large driving model deals with stop signs" talk from Tesla's AI department? Given the multiplicative effect of driving decisions and the weird real world out there, it has to be extremely reliable and robust to be a good driver as the miles mount up.


The reason you would insert an LLM into the vision stack is to deal with the weird and unexpected. Tesla’s current stop sign approach is to train a classifier from scratch on thousands of stop signs images. It’s not surprising that architecture can’t deal with stop signs that fall outside the distribution.

LLMs with vision work completely differently. You’re leveraging the world model, built from a terabyte of text data, to aid your classification. The classic example of an image they handle well is a man ironing clothes on the back of a taxi. Where traditional image classifiers wouldn’t have a hope of handling that, vision LLMs describe it with ease.

https://llava.hliu.cc/


This is overcome with superhuman sensing, reaction times, and better visual angles.

i.e. radar-based deceleration already helps many humans avoid collisions. It'll help the robot too.

The rest of driving is relatively simple, methodical, and slow.


But also ambiguous and from time to time requiring judgment. Should I let that dumb ass driver go next or pull around them? I agree it’s insane not to allow the automated driving use more sensors than humans have. I wish I had vision that can cut thru rain and glare.


That is a conjecture which does not represent the current state of the technology.


I don't think anybody has demonstrated convincingly that a self-driving car would specifically need AGI to achieve what really matters: statistically better results on a wide range of metrics. I don't expect SDCs to solve trolley problems (or human drivers to solve them either) or deal with truly exceptional situations. To me that's just setting an unnecessarily high bar.


While this might be true for the truly general case (though I’d bet it’s not), when you have a very constrained operating area it’s a lot less true.

Waymo in Phoenix and current cruise cars in SF seem like good counter examples.

The bar is also a lot lower - human drivers are pretty bad.


I roughly agree. I think the only other piece that is critical to mention would be remote humans for support in extremely awkward situations, to help get the car back on track, as RLHF.

This requires the car can always come to a safe stop, which I think the LLM-based driver should be very capable of doing.


Automatic emergency braking would be a good first step; it would certainly solve the case in the article where the car drives through downed power lines.

I think the logical next step is to have the LLM output the driving path similar to how GPT4 outputs SVGs. Feed in everything you have, raw images, depth maps, VRU positions, nav cues, and ask the LLM output a path.


> I would wager they are far from the median human driver, and absolutely nowhere close to the top 10% of human drivers.

I think you would likely lose that wager. In the information Waymo has published about their safety methodology, they benchmark themselves against an always-attentive driver (this already rules out most accidents), which they still outperform by a large margin. Even a driver who is never distracted doesn't have constant 360 degree vision or near-instant reaction times.


I personally don’t care that much about the collision rate. I care about the fatality or injury rate (deaths or injuries per mile driven).

I don’t mind picking an Uber driver if it reduces my chances of getting hurt, even if we are slightly more likely to experience a fender bender.


I hadn't heard this statistic:

> since a small minority of human drivers cause the vast majority of incidents

It's tough to find anything relevant on DDG or google, they come up with information about racial disparities in traffic stops...can you link anything to read about this?


> a small minority of human drivers cause the vast majority of incidents, thereby severely inflating the mean human incident rate.

Could you cite a source for this claim? I have tried many different searches and can't find any support for it.


> a small minority of human drivers cause the vast majority of incidents

Citation needed.


Thankfully driving safer than the mean or median taxi driver seems very achievable.


P75 would be nice


Hmm, I think that, in fact, this comparison is disingenuous! If you want to reduce net collisions, you want to compare to the mean, not median. In that case, once you've produced a self-driving car that's better than the mean, you're saving lives. You don't need to create a self-driving car which is better than every driver, ever.


It doesn't even need to be at the mean. It just needs to drive more safely than the worst drivers who cause fatal crashes, and you're saving lives.


SDCs will only move the average upwards if they're taking less-safe drivers off the road. If a 40%-safe SDC replaces a 50%-safe human driver, then you've shifted the average downwards.



