Hacker News

Very little of what goes into a current-generation self-driving car is based on machine learning [1]. The reason is exactly your point -- algorithmic approaches to self-driving are much safer and more predictable than machine learning ones.

Instead, LIDAR should exactly identify potential obstacles to the self-driving car on the road. The extent to which machine learning is used is to classify whether each obstacle is a pedestrian, bicyclist, another car, or something else. By doing so, the self-driving car can improve its ability to plan, e.g., if it predicts that an obstacle is a pedestrian, it can plan for the event that the pedestrian is considering crossing the road, and can reduce speed accordingly.

However, the only purpose of this reliance on the machine learning classification should be to improve the comfort of the drive (e.g., avoiding abrupt braking). I believe we can reasonably expect that, within reason, the self-driving car nevertheless maintains an absolute safety guarantee (i.e., it doesn't run into an obstacle). I say "within reason" because, of course, if a person jumps in front of a fast-moving car, there is no way the car can react. I think it is highly unlikely that this is what happened in the accident -- pedestrians typically exercise reasonable precautions when crossing the road.
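To illustrate the division of labor described above, here is a toy sketch (all names, classes, and margin values are hypothetical, not from any real system): the ML classification only tunes comfort margins, while a hard minimum clearance is enforced regardless of what the classifier says.

```python
# Hypothetical comfort margins per predicted obstacle class:
# (extra clearance in m, speed cap in m/s). Values are illustrative only.
CLASS_MARGINS = {
    "pedestrian": (3.0, 5.0),
    "bicyclist": (2.0, 8.0),
    "car": (1.0, 15.0),
    "unknown": (1.5, 10.0),
}

HARD_MIN_CLEARANCE = 0.5  # m; enforced no matter what the classifier outputs


def plan_speed(obstacle_class, distance_m, current_speed):
    """Pick a target speed given a classified obstacle at distance_m.

    The classification only affects comfort (how early and gently we
    slow down); the hard stop below does not depend on it at all.
    """
    margin, cap = CLASS_MARGINS.get(obstacle_class, CLASS_MARGINS["unknown"])
    if distance_m <= HARD_MIN_CLEARANCE:
        return 0.0  # absolute safety guarantee: never drive into an obstacle
    if distance_m <= margin:
        # Inside the comfort margin: ease off smoothly instead of braking hard.
        return min(current_speed, cap * distance_m / margin)
    return min(current_speed, cap)
```

A misclassification here degrades comfort (braking later and harder than ideal), but the hard-clearance branch still fires either way.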

[1] https://www.cs.cmu.edu/~zkolter/pubs/levinson-iv2011.pdf



Actually, because there's a severe shortage of LIDAR sensors (much like video cards and cryptocurrencies, self-driving efforts have outstripped supply by a long shot), machine learning is being used quite broadly in concert with cameras to provide the model of the road ahead of the vehicle.


That is what the comment is saying. Of course the vision stuff is done with machine learning -- that is, after all, the state of the art. But that is a tiny part of the self-driving problem. So you can recognize pedestrians, other cars, lanes, signs, maybe even infer velocity and direction from samples over time. But the high-level planning phase isn't typically a machine learning model, so if you record all the state (Uber had better, or that's a billion-dollar lawsuit right there) you can go back and determine whether the high-level logic was faulty, the environment model was incomplete, etc.


I was responding specifically to "Instead, LIDAR should exactly identify potential obstacles to the self-driving car on the road." -- LIDAR isn't economically viable in many self-driving car applications (for example: Tesla, TuSimple) right now.


Then your comment is off-topic, because the realm of discussion was explicitly "self-driving cars equipped with LIDAR". Uber's self-driving vehicles are all equipped with LIDAR, as are basically all other prototype fully-autonomous vehicles.


How is it off topic when we're discussing "current-generation self-driving" vehicles?

It's a point of clarification that the originally listed study doesn't take into account, but which could be important to the broader discussion. Especially considering that while this vehicle had LIDAR, the other autonomous vehicle fatality case did not.

> as are basically all other prototype fully-autonomous vehicles

As I pointed out with examples above, no, they are not.


The vehicle involved in the accident has an HDL64 on the roof.


Is that true?

You can get a depth-sensing (time-of-flight) 2D camera, the Orbbec Astra, for $150, or a 1D laser scanner, the RPLIDAR, for $300. Of course they are probably not suited for automotive use, but to me, even an extra $2000 for self-driving car sensors isn't that much.


But that's the issue: identifying a pedestrian vs. a snowman or a mailbox or a cardboard cutout is important when deciding whether to swerve left or right. It's an asymptotic problem: you'll never get 100% identification accuracy, and because of that, even the rigid algorithms will make mistakes.

LIDAR is also not perfect when the road is covered in 5 inches of snow and you can't tell where the lanes are. Or at predicting a driver that's going to swerve into your lane because they spilled coffee on their lap or had a stroke.

With erratic input, you will get erratic output. Even the best ML vision algorithm will sometimes produce shit output, which will become input to the actual driving algorithm.


> Or at predicting a driver that's going to swerve into your lane because they spilled coffee on their lap or had a stroke.

Neither are humans, and a self-driving car can react much faster than any human ever could.
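To put a number on the reaction-time point, here's a back-of-the-envelope sketch (constant-deceleration model; the reaction times and deceleration value are rough assumptions, not measured figures):

```python
def stopping_distance(speed_mps, reaction_s, decel_mps2=7.0):
    """Distance covered during the reaction time, plus braking distance
    under constant deceleration (a standard simplification)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)


# At ~50 km/h (13.9 m/s), compare an assumed ~1.5 s human reaction
# against an assumed ~0.1 s sensor-to-brake latency:
human = stopping_distance(13.9, 1.5)    # ~34.7 m
machine = stopping_distance(13.9, 0.1)  # ~15.2 m
```

Under these assumptions the computer stops roughly 19 m shorter, entirely from the reaction-time term; the braking-distance term is identical for both.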


I can see when the car in front of me is acting erratic, or notice when the driver next to me is talking on their phone, and adjust my following distance automatically. I don't think self-driving cars are at that point yet. The rules for driving a car on a road are fairly straightforward; predicting what humans will do is far from trivial, and we've had many generations of genetic algorithms working on that problem.


Self-driving cars could compensate for that with reaction time. Think of it this way: you trying to predict what the other driver will do is partly compensating for your lack of reaction time. A self-driving car could, in the worst-case scenario, treat the other car as a randomly-moving car-shaped object, compute the envelope of its possible moves, and make sure to stay out of it.
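That "envelope of possible moves" idea can be sketched very crudely: over-approximate the other car's reachable set as a disc (assuming it could accelerate up to some bound in any direction over a short horizon) and check that we stay outside it. All function names and the acceleration/horizon numbers below are made up for illustration:

```python
import math


def reachable_radius(speed_mps, a_max_mps2, horizon_s):
    """Worst-case distance the other car can cover within horizon_s,
    assuming it may accelerate at up to a_max in any direction.
    A deliberately crude over-approximation of its reachable set."""
    return speed_mps * horizon_s + 0.5 * a_max_mps2 * horizon_s ** 2


def stay_clear(ego_xy, other_xy, other_speed, a_max=4.0, horizon=2.0, buffer=1.0):
    """True if the ego position lies outside the other car's
    worst-case envelope (plus a safety buffer)."""
    dx = ego_xy[0] - other_xy[0]
    dy = ego_xy[1] - other_xy[1]
    dist = math.hypot(dx, dy)
    return dist > reachable_radius(other_speed, a_max, horizon) + buffer
```

The obvious catch, and presumably why real planners don't do exactly this, is that such an over-approximation is extremely conservative: on a busy road, almost everything is inside somebody's worst-case envelope.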


Normal cars could do this too. Higher-end luxury cars have already started using the parking sensors to automatically apply the brakes well before you do if something is in front of the car and approaching fast. If this were really that easy, then we wouldn't have all these accidents reported about self-driving cars: the first line of your event loop would just be `if (sensors.front.speed < -10m/s) {brakes.apply()}`, and Teslas and Ubers would never hit slow-moving objects. I suspect that's not really how this works, though.
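Even a slightly less naive version of that one-liner -- triggering on time-to-collision rather than raw closing speed, so slow-but-close objects are caught too -- is still a toy. A sketch (the 2-second threshold is an arbitrary assumption):

```python
def ttc(distance_m, closing_speed_mps):
    """Time to collision; infinite if we are not actually closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps


def should_brake(distance_m, closing_speed_mps, threshold_s=2.0):
    """Brake when the projected time to collision drops below the threshold."""
    return ttc(distance_m, closing_speed_mps) < threshold_s
```

In practice this still says nothing about noisy range readings, false positives from roadside objects on curves, or obstacles that only enter the path later -- which is presumably where the real difficulty lives.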


Exactly -- with LIDAR the logic isn't very tricky: if something is in front, stop.


More than that -- if something is approaching from the side at intercept velocity, slow down to avoid the collision.


You're handwaving away the crux of the matter: while for a human the condition seems straightforward (as we understand that "in front" means "in a set of locations in the near future, determined by a many-dimensional vector set"), expressing this in code is nontrivial.
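Even the most stripped-down codable version of "in front" shows this: you end up checking membership in the corridor the car will sweep over some horizon, which already drags in speed, vehicle width, and a prediction horizon -- and this toy version still assumes a straight path and a stationary obstacle. All names and numbers here are hypothetical:

```python
def in_future_path(obstacle_xy, speed_mps, horizon_s=2.0, half_width_m=1.2):
    """Crude straight-line version of "in front": is the obstacle inside
    the corridor the car will sweep in the next horizon_s seconds?
    Coordinates are in the ego frame: x forward, y to the left."""
    x, y = obstacle_xy
    return 0.0 <= x <= speed_mps * horizon_s and abs(y) <= half_width_m
```

Handling curved paths, moving obstacles, and sensor uncertainty turns this one-liner into exactly the "many-dimensional vector set" problem described above.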



