
That cars are capable of crashing was known before FSD was even a thing. Statistically, you should be much more afraid of cars driven by humans, because they can do everything FSD can, and there truly aren't many cars on the road using FSD.


Statistically, yes, but I think people can be afraid of the lack of direct accountability for AVs. With a distracted or drunk driver, for example, cause and blame are easy to attribute. And you're not afraid of the car, you're afraid of the potentially negligent, dangerous person driving it.

But an autonomous object that behaves wildly unpredictably simply because of a malfunctioning sensor or a software bug defies the reasoning you're normally trained to apply. You can no longer make eye contact with the driver at a crosswalk; you can only assume the AV will behave normally, and if it doesn't and runs you over, there's nothing you could have done differently. You were just unlucky and lost the statistical lottery.

I am very uneasy about car manufacturers and regulators writing me off as a factored-in statistic. That basically means allowing vehicles on the road whose incidents have a deterministic cause.


Perhaps this is just a difference between two valid personal views.

For me, people are the thing that will be wildly unpredictable. I anticipate that autonomous vehicles will eventually make roads seem extremely machine-like & predictable.

I recognise that Tesla's FSD isn't there yet, but I anticipate that it eventually will get there.


The statistic I wonder about: self-driving cars will shrink the pool of available donor organs by about 20% ( https://futurism.com/neoscope/self-driving-cars-will-save-li...).

Is there an externality that should be priced in here? How?
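
A rough back-of-envelope in Python, where every number except the ~20% share from the article is a made-up placeholder:

    # Back-of-envelope sketch; all numbers except the ~20% share
    # from the linked article are hypothetical placeholders.
    deceased_donors_per_year = 10_000  # hypothetical baseline donor pool
    transplants_per_donor = 3.0        # hypothetical average organs used per donor
    crash_related_share = 0.20         # share of the pool AVs would remove (per article)

    lost = deceased_donors_per_year * transplants_per_donor * crash_related_share
    print(f"{lost:,.0f} transplants/year forgone")  # 6,000 with these numbers

Pricing the externality would then mean assigning a value to each forgone transplant, which is where it gets uncomfortable.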


Think of it this way... those organs are still being used to save someone's life, they're just doing it by staying in the original body.


In a strict utilitarian analysis, they are saving one person's life, but could instead save several people's lives and dramatically improve others' quality of life.


Ah yes, the "let's kill people to get their organs" argument, which just seems so reasonable...


One utilitarian perspective is that a world where you are at risk of your vital organs being reallocated at any time is such a bad world (deep anxiety for everyone all the time) that organ seizure would never be a reasonable policy choice.

On the other hand:

(1) Maybe this isn't true and we just think organ harvesting is an evil because of our innate status quo bias. Maybe, like in Kazuo Ishiguro's 'Never Let Me Go', if there were a class of people who could be harvested at any time, they would just... accept it and deal with it.

(2) Maybe we do live in that world? People continued to travel to, work in, and do business with a place where there was credible evidence of an underclass being harvested ( https://en.wikipedia.org/wiki/Organ_harvesting_from_Falun_Go... )

(3) What if there were one person who just LOVED forcible organ harvesting, loved it ten billion times more than every other human on the planet hated it? Morally, a utility maximizer would choose to bend to that monster's preference.
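
To make (3) concrete, here's the toy ledger a naive sum-of-utilities maximizer would run (all numbers hypothetical, chosen only to match the framing above):

    # Toy utility-monster ledger; numbers are hypothetical and mirror
    # the "ten billion times more" framing above.
    population = 8_000_000_000           # everyone else, each hating harvesting 1 unit's worth
    disutility = population * -1.0       # total harm to everyone else
    monster_joy = 10_000_000_000 * 1.0   # one person who loves it 10 billion times as much

    print(disutility + monster_joy)  # +2,000,000,000: the naive maximizer sides with the monster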


You're trying to make this polarized, but it's not about good vs evil. It's an observation that something in equilibrium today is likely to shift somewhat in the future. No one is saying "now we need to harvest organs"; the point is "a new problem is going to emerge in the future, and that problem will require new solutions".

Even as someone quite interested in autonomous vehicles, and optimistic that they will become extremely feasible, creating dramatic changes in our world in the process, I hadn't until now thought of the organ shortage they might create.

Does it mean we start creating people to harvest their organs? Of course not; please don't be ridiculous. Perhaps it means that biotech companies have yet another gap in medicine to think about tackling, though.


It's certainly not reasonable. But neither is dismissing a question about the side effects of fewer crash-related donations by pointing out that the people in the cars will be alive.


> That cars are capable of crashing was known before FSD was even a thing. Statistically, you should be much more afraid of cars driven by humans, because they can do everything FSD can, and there truly aren't many cars on the road using FSD.

It's not about statistics; it's about technical sensing limitations in Teslas leading to blind spots that have provably killed people. Human drivers with such blindness are not even allowed on the road. Yet Tesla stubbornly, inexplicably refuses to use state-of-the-art sensors that have been shown to make autonomous cars safer for everybody; it's really baffling to me. As a result, their cars have sensing limitations such as trouble detecting large stationary objects, which is why you often see stories of Teslas running into parked vehicles, especially large ones like fire trucks [0]. This problem dates back to 2016, when a man was decapitated because his Tesla couldn't recognize it was about to run into a tractor trailer [1]. In a sane world, that would have been the end of Tesla's experiment, and they would have been forced to adopt safety features we all know they should be using (and that Tesla knew, or should have known, about at the time of the accident). But no: the exact same thing happened again in 2019, this time with a Model 3 sliding under a tractor trailer [2]!

Not running into things is literally what driverless cars are supposed to excel at, and yet Teslas routinely do exactly that because of limitations in the Tesla sensor stack, limitations we don't need statistics to call out as troubling on technical grounds (the sketch after the links below illustrates one commonly cited failure mode). At this point the choice has a confirmed and ever-expanding body count, and Tesla is not working to fix the problem, which has been recurring for over 5 years now (wasn't level 5 autonomy predicted within that timespan?); nor are they willing to stop using public spaces to beta test their products, which in my opinion needs to end immediately.

[0]: https://www.msn.com/en-us/news/technology/why-teslas-keep-st...

[1]: https://www.theguardian.com/technology/2016/jun/30/tesla-aut...

[2]: https://electrek.co/2019/03/01/tesla-driver-crash-truck-trai...
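
On the stationary-object problem: radar-plus-camera stacks are widely reported to discard stationary radar returns at speed, so that overhead signs and bridges don't cause constant phantom braking. A toy Python sketch of that heuristic (emphatically not Tesla's actual pipeline) shows how a parked fire truck falls out of the picture:

    # Toy sketch of stationary-return filtering; NOT Tesla's actual code.
    from dataclasses import dataclass

    @dataclass
    class RadarReturn:
        range_m: float       # distance to the object
        doppler_mps: float   # radial closing speed (negative = approaching)

    def is_stationary(r, ego_speed_mps):
        # An object closing at exactly our own speed is standing still.
        return abs(r.doppler_mps + ego_speed_mps) < 0.5

    def filter_returns(returns, ego_speed_mps):
        # Common heuristic: drop stationary returns to avoid braking for
        # signs and bridges. Side effect: a parked truck in-lane is dropped too.
        return [r for r in returns if not is_stationary(r, ego_speed_mps)]

    ego_speed = 30.0                                          # m/s
    fire_truck = RadarReturn(range_m=80, doppler_mps=-30.0)   # parked, dead ahead
    lead_car = RadarReturn(range_m=40, doppler_mps=-5.0)      # slower moving traffic

    print(filter_returns([fire_truck, lead_car], ego_speed))
    # Only the lead car survives; the fire truck never reaches planning.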


It is entirely understandable that Tesla would wing it without lidar, once one gets over the assumption that their priority is creating a level 5 system.

What Tesla cares about is selling expensive vaporware. If that vaporware depended on an expensive prop, it would cut into the cash flow they want to spend on software dev. They've repeatedly said "we can do it without lidar"; what they meant was "we cannot afford lidar in our situation." It doesn't look like it's going to work, but it also doesn't look like adding lidar solves the basic issues. So in the end, at least they didn't con people into buying useless lidar. Just useless vaporware.



