
There was one second between the driver taking over and the collision, so it was likely a panic reaction to an imminent crash.

Which is fundamentally the problem with self-driving technologies. If it isn't 100%, it might just increase the danger: it lures the driver into distraction, because after enough kilometres on straight roads, who is going to keep paying attention while the car drives itself... and then boom, exception, you have one second to focus and ingest the situation perfectly or you die.




It's been proven that people are extraordinarily poor drivers for the first few seconds they take over driving from a computer.


I would say any activity that demands focus has the same pattern. Anyone who has driven a car, ridden a bike, etc. can tell you that it takes a while to get back into focused mode if you let your attention drift even for a short while.

It's much more pronounced if you've ever raced a car on a track, ridden a fast bike on tricky paths, or even driven go-karts: if your mind wanders for a split second, it takes a few seconds of active focusing to get back to the baseline where you enter "flow" with the activity again.

Expecting drivers who let a machine control their machine, stepping out of the control feedback loop entirely, to regain focused control for split-second decisions is just absurd to me.


not to defend shitty self-driving implementations, BUT if on average they crash less than humans, even if they’re not 100%, society might accept that.


The problem is the accident distribution is really skewed, with a small fraction of drivers contributing most accidents. That means the majority drive a lot better than average, and would likely not put up with a regression in accident rates from a system that might be "better than average" but still not as good as they are. I know I wouldn't: I pay attention, I haven't had an accident in 30 years, and the few FSD implementations I've tried, including Tesla's, scared the bejesus out of me.
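
To make the skew concrete, here's a toy sketch with made-up numbers (not real crash statistics): when a small group causes most crashes, the median driver is far better than the mean, so a system that merely beats the "average driver" can still be a regression for most people on the road.

    # Toy crash rates per million km, entirely made up for illustration.
    crashes = (
        [0.2] * 80 +   # 80% of drivers: rarely crash
        [1.0] * 15 +   # 15%: occasional crashes
        [8.0] * 5      # 5%: high-risk drivers causing most crashes
    )

    mean = sum(crashes) / len(crashes)
    median = sorted(crashes)[len(crashes) // 2]
    worst_share = sum(sorted(crashes)[-5:]) / sum(crashes)

    print(f"mean rate:   {mean:.2f}")                    # ~0.71
    print(f"median rate: {median:.2f}")                  # 0.20
    print(f"crash share of worst 5%: {worst_share:.0%}") # ~56%

With these made-up numbers, a system that exactly matches the mean (0.71) would still more than triple the crash rate for the 80% of drivers sitting at 0.2.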


>The problem is the accident distribution is really skewed, with a small fraction of drivers contributing most accidents.

Even if you take this at face value:

1. Such "dangerous" drivers still screw over the "safe" ones by crashing into them. Unless you're willing to take them off the road entirely, you can't just invoke "the majority drives better than average" and call it a day. At least in the US, doing that is a non-starter.

2. Driverless systems aren't slightly safer than the average driver, they're significantly safer. For instance, Waymo claims "81% fewer injury-causing crashes". This effect alone might swamp the "majority drive a lot better than average" phenomenon you talk about. Most drivers might be safer than average, but are they 5x safer than average?


> Driverless systems aren't slightly safer than the average driver, they're significantly safer.

Waymo operates an extremely conservative system with human oversight, roof-mounted LIDAR, a fully 3D-mapped environment, a very limited deployment area, etc. Its record says little about Tesla or other self-driving setups that often rely on just a couple of cameras and a 2D map of roadways.

Tesla has boasted about the value of their system, using the classic per-mile argument to justify its advantages. But the overwhelming bulk of Tesla FSD or Autopilot miles are on highways, which are statistically much safer than any other kind of driving. That number is then compared against a grab bag of every other style of driving, the drunk on an unlit country road included, and declared the victor. And even among highway accidents, extreme weather aside, many involve people who deliberately wouldn't be using any assists: stunt driving and the like.
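
A back-of-the-envelope sketch of that per-mile comparison problem, using entirely made-up rates and mileage mixes: if assisted miles are almost all highway miles, the per-mile number looks dramatically better than the all-roads human baseline even when the system is no better than a human on the same roads.

    # Made-up crash rates per million miles, purely to illustrate the
    # baseline-mismatch argument; these are not real statistics.
    human_highway_rate = 0.5
    human_city_rate = 2.0
    human_mix = {"highway": 0.4, "city": 0.6}     # assumed human mileage mix
    assist_mix = {"highway": 0.95, "city": 0.05}  # assumed driver-assist mileage mix

    def blended_rate(mix):
        """Crash rate for a given highway/city mileage mix, same per-road skill."""
        return mix["highway"] * human_highway_rate + mix["city"] * human_city_rate

    human_baseline = blended_rate(human_mix)   # 1.40
    assist_rate = blended_rate(assist_mix)     # 0.575, with identical per-road skill
    print(f"apparent improvement: {1 - assist_rate / human_baseline:.0%}")  # ~59%

The ~59% "improvement" here comes entirely from the road mix, not from the system driving any better.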

It all seems very dubious right now. I believe Waymo's stats, I don't remotely believe Tesla's. We'll see what happens as people start using it more in normal driving situations.


Seems robust enough for the scenarios it's applied to, and that's OK. One step forward could be self-driving in certain areas.


> At least in the US, doing this is a non-starter.

So far, anyway. I wonder if legislation could move in that direction, though? For example, if you have multiple drunk driving offenses or are convicted of reckless driving, they take your license away for a time.


The nature of each failure matters as much as the frequency of failure. The way human accidents happen is vastly different from the way self-driving car accidents happen, even if the end result is pixel-for-pixel the same. Blame, compensation, and the "peace of mind" of those involved are all affected by this, so it matters a lot.

The solution to this is self-driving cars reaching critical mass through some other means[1], because then it will become the new normal before anyone realises it.

Personally, I think taxis are going to be the thing that takes them to critical mass regardless of all safety considerations. The driving force will be the fact that in some countries, human-driven taxis will soon be economically infeasible at the prices people can afford. Then, if you agree that Uber et al. are far too entrenched in society for people to just forget about them, you can imagine Waymo, Robotaxi, etc. picking up.

[1] The "true" solution is to train it to get into the kinds of accidents humans get into, and to try to "shape its uncertainty", but this is hard and of course infeasible with real-world experiments. Recently, simulations have become extraordinarily better (thanks to generative models) for robotics/self-driving tasks, so there is a slim chance this might change.


I don't know if humans are rational in that way.

Isn't nuclear energy one of the safest forms of energy overall?

But I still can't shake the fear of it.

And that fear across most of the population has hindered nuclear energy despite all its benefits.

It is going to be interesting to watch how regulation unfolds.


There are two dimensions to "safe":

- how often it fails,

- what the consequences of a failure are.

When you plot those two and mark out the "acceptable" area, it is skewed. And that is perfectly normal.

https://www.wolterskluwer.com/en/solutions/enablon/bowtie/ex...
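
A minimal sketch of that idea, with an acceptance boundary I made up for illustration: the acceptable region is skewed so that frequent-but-minor failures can pass while rare-but-fatal ones do not.

    # Made-up frequency/severity acceptance table (a crude risk matrix).
    ACCEPTABLE = {
        "rare":       {"minor": True,  "injury": True,  "fatal": False},
        "occasional": {"minor": True,  "injury": False, "fatal": False},
        "frequent":   {"minor": True,  "injury": False, "fatal": False},
    }

    def acceptable(frequency: str, severity: str) -> bool:
        """Look up whether a failure mode falls inside the acceptable area."""
        return ACCEPTABLE[frequency][severity]

    print(acceptable("frequent", "minor"))  # True: fails often, consequences trivial
    print(acceptable("rare", "fatal"))      # False: rare, but the consequence is unacceptable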


There was no taking over; the car was always in the driver's control. The driver was using cruise control, not anything self-driving.


"According to the company’s marketing materials, Xiaomi’s Navigate on Autopilot function can change lanes, speed a car up or down, make turns or brake with minimal human input."

The driver had been flagged multiple times for taking their hands off the steering wheel. The entire story is about the autopilot driving the car, with the driver taking over one second before the accident. This is literally in the timeline.

Are you replying to the wrong story? How could you be so blatantly wrong?


You are required to have your hands on the steering wheel at all times in China. The driver should be in control at all times.



