
> I’ve worked in Tesla Autopilot before and saw it hit a mannequin because we never had a mannequin in our dataset before (we might have had it in the data, but it was not a part of our ontology for the network to predict).

Incredible. To be clear, your group and the company you work for saw fit to release your beta-quality autopilot onto public roads even as it was hitting human-looking objects in your test labs. Is that what you're admitting here? Did anyone in your group object to this? Were you personally concerned?

Instead of moving fast and breaking things, what if you had not rushed your autopilot out, but waited for the technology to improve to the point that your product wouldn't hit a human-looking object it had never seen before?




There’s too much insanity around building new tech these days, and I find a lot of the hate directed at Autopilot stems from hate directed at Elon. Let’s go over the basics for Autopilot:

1. It costs thousands of dollars a year plus $15k one time, and it’s very easy to get banned from Autopilot for life. There’s a 3-strike rule, plus you need 1,000 miles driven on your Tesla with a good safety score before you’re allowed to access Autopilot.

2. I’ve seen 2 types of customers use Autopilot. One is rich dudes who buy it just to have all the bells and whistles. They use it less than once a month, and honestly it’s a waste of money for them. The other is passionate early adopters: they regularly make YouTube videos, constantly stress test our tech, and are huge contributors to the tech itself. I’d say 10-20% of our users are the latter. The kind of group who don’t use Autopilot? The regular old Joe, who perhaps would like Autopilot for some practical use case. It’s too expensive and has so many restrictions (like having to grab the steering wheel once every few minutes or Autopilot disengages, and then you get banned from it even if you pay) that it doesn’t make sense for him to buy this tech anyway. In essence, this is not tech that is being used by regular people who have a chance of misusing it. Ever since the Uber self-driving crash, heads roll if a self-driving car crashes, and as an engineer I don’t get any access to Tesla legal, but it’s my understanding that in none of the cases filed against Tesla did they prove Autopilot was active (forget Autopilot being the cause).

So yes, we’re not building new tech that’s killing hundreds; no one has any ethical dilemmas here. We’re building tech that a passionate group of users really wants to see succeed and helps us do that, and the rest of the users just give us money for some reason even though they don’t really use it or trust it. I would frankly be more torn about working at a place like Waymo, where the user has almost no control over the car (they don’t even sit at the steering wheel) and they have to solve the problem in one shot before releasing it to the public, while Tesla can keep iterating step by step (with its passionate user base supporting us and showing love all the way).

Edit: Changed this comment to make it smaller


A cynical person might remark upon the fact that your being a Tesla employee might have some bearing on your position, but I am not such a person.

> In essence, this is not tech that is being used by regular people who have a chance of misusing it.

Now this is true in one sense – people who can't afford a Tesla, and who aren't willing to spend an additional $15k on a piece of software which has (many, many, many) times been described by the company's CEO in a way so optimistic it does not have a very strict correlation with material reality, cannot use the software to drive a car – and very false in another: _what if someone else's Tesla crashes into me?_

> while Tesla can keep iterating step by step

I could be wrong (this is a genuine statement, please don't take it as a passive aggressive one, it's not intended that way) but doesn't this rely on Tesla first finding a failure, then diagnosing a symptom, writing a fix, etc.? The fact is, though, that this initial failure might be one of several crashes which have occurred in a Tesla on AutoPilot, which isn't great?


PS: I have left Tesla, but sure, I might be biased since I have friends there and worked there for a while.

> I could be wrong (this is a genuine statement, please don't take it as a passive aggressive one, it's not intended that way) but doesn't this rely on Tesla first finding a failure, then diagnosing a symptom, writing a fix, etc.? The fact is, though, that this initial failure might be one of several crashes which have occurred in a Tesla on AutoPilot, which isn't great?

Failures are generally user disengagements, not crashes. We measure user disengagements, classify them, and try to drive the egregious ones to zero. FSD has had one major crash (no injuries) that is being investigated by the NHTSA, and a few minor bumps (I go into more detail below).
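Roughly, the metric is disengagements per 1,000 fleet miles broken out by a triage label. A toy sketch of what I mean (labels, log format, and numbers are all made up for illustration, this is not our actual tooling):

  # Toy sketch: disengagement rate per 1,000 fleet miles, split by triage label.
  # Labels, log format, and numbers are illustrative only.
  from collections import Counter

  disengagements = [
      {"drive_id": 1, "label": "egregious"},   # e.g. headed toward a curb
      {"drive_id": 2, "label": "benign"},      # driver simply preferred to take over
      {"drive_id": 3, "label": "egregious"},
  ]
  total_fleet_miles = 25_000.0

  for label, n in Counter(d["label"] for d in disengagements).items():
      print(label, round(n / total_fleet_miles * 1000, 3), "per 1k miles")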

> what if someone else's Tesla crashes into me?

I think that is a very fair point. It happened when an Uber self-driving car crashed into and killed a pedestrian, which was a major incident in this industry. The problem with DL models is that they are unexplainable and we cannot tell when they will fail (though in the Uber case it was not exactly the DL model failing). Tesla took this risk and has managed fine, with no injuries to date. And now the main reason I made this post: the tech keeps getting better. We have this model from Meta that just literally segments everything in an image (even ones you take from your phone). It honestly feels like we are leaving the risky DL territory and reaching the "we can't understand how, but it just works" territory, where you can rely on a deep learning model to do what you expect it to do.
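For the curious, a minimal sketch of what "segments everything" looks like in practice, assuming Meta's open-source segment-anything package and a downloaded ViT-H checkpoint (the file names and image path here are placeholders, not anything from our stack):

  # Minimal sketch: automatic "segment everything" with Meta's open-source
  # segment-anything package. Assumes `pip install segment-anything opencv-python`
  # and a locally downloaded sam_vit_h_4b8939.pth checkpoint (placeholder paths).
  import cv2
  from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

  image = cv2.cvtColor(cv2.imread("road_scene.jpg"), cv2.COLOR_BGR2RGB)

  sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
  masks = SamAutomaticMaskGenerator(sam).generate(image)

  print(len(masks), "regions segmented")
  for m in masks[:5]:
      print(m["area"], m["bbox"])  # pixel area and [x, y, w, h] box per region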


> Let’s go over the basics for Autopilot:

Go on Pornhub right now (if you're not at work) and search for sex in tesla. You'll find people driving Teslas on public roads while having sex in the driver's seat. The guy touches the steering wheel every so often to keep the car's Autopilot activated. The videos have been posted over a span of years. Another one was literally posted yesterday. Combined they have tens of millions of views.

Is this what you mean when you speak of "passion" in Tesla owners? When will these Tesla owners be banned for life? Would you be okay with these people fucking in their Tesla as it drives around your town in broad daylight? Around you and your family?

> I find the hate towards autopilot to always stem from non-users and bystanders.

There are two other categories:

3: People who have been injured, maimed, or killed in Teslas, and the people who knew and loved them. This applies to me.

4: People who are passionate about robotics, and are disgusted at how Tesla in particular and Elon Musk especially are responsible for eroding public safety in the name of profits and market dominance. This also applies to me.

I appreciate that some people who own Teslas are very "passionate" about the expensive toys they have bought. Toddlers are also just as passionate about their material world. But I really don't care how much they love their cars; what I care about is that people I knew are dead and that my field is a joke.

> And then we have bystanders like you with no understanding of what’s actually going on, who want to ban everybody from using Autopilot because you think drunk people are using Autopilot or something.

I have a Ph.D. in computer engineering focused on robotics. My dissertation was on dynamic autonomous control. I've built many autonomous vehicles in my time, including cars, forklifts, boats, airplanes, and wheelchairs. I teach graduate students at a top international university. I've worked at and consulted on robotics at top corporations you've heard of. Sorry if you mistook me for a bystander with no understanding of what's going on.

What I want is for my community to be safe. What I want is for professionals in my field to take safety seriously, and not release half-baked, admittedly beta-quality software into the wild. That's the craziest part of all of this -- you and Elon and Tesla and all the passionate owners don't even contest that the software and hardware are not ready for the task. We had established protocols for testing autonomous vehicles in public areas in 2007 during the DARPA Urban Challenge. Those protocols were designed to keep people safe, and they did. No one died. Tesla threw those protocols out the window, and guess what, people died. This is not to say that autopilot should be banned forever. It's to say you shouldn't move fast and break things, because sometimes those things are people, and sometimes those things are established safety protocols that are there for a reason.

> (Needless to say Tesla will never hit a mannequin or anything like it ever again as 100s of videos by our passionate users have shown. I’ve also seen it avoid a teddy bear on a roller chair that was in the middle of the road for some reason, something definitely not in our training set)

Is it needless to say? Because in 2016 a man was decapitated due to his AP system failing to sense an obstacle, and then it happened again to a second man in 2019 on a newer model, with the same failure mode:

https://cdllife.com/2019/feds-say-autopilot-was-engaged-in-fatal-tesla-vs-semi-crash/

Why didn't Tesla fix this beta-level bug in 3 years? Is it fixed today? If not, how long until Tesla AP kills another person? If you had tested your hardware and software in a lab more, would those two people still be alive? Would their families still be whole? Have you reflected on this at all?

Edit: I've responded to your original comment, but it seems you've heavily edited it after the fact. However, your choice of words about "heads rolling" has incensed me to a degree that I cannot continue this discussion civilly, and I have already gone too far. I'm not deleting this because I'd rather get it off my chest. I get that you're not responsible for those deaths, but Elon Musk is from my point of view, and the general "move fast and break things" attitude is as well.


No idea why you had to trot out that you have a PhD when I was pointing out that you were not a user. Well, news flash, I did my PhD in EECS with a focus on perception and autonomous vehicle applications at a top-4 CS school, and so did most people in Autopilot (most in Tesla AP are from Stanford, Berkeley, or CMU). In fact, I worked with people who were a part of the 2007 DARPA Urban Challenge and who are in Autopilot now, and I can assure you they think as deeply about safety as you claim you do. (I did not want to default to an argument from authority, but you made it one.)

I do in fact sleep soundly, knowing I don't call engineers murderers on online forums. The 2016 and 2019 cases you're talking about engaged Autopilot, not FSD Beta (the 2019 crash engaged it just 10 seconds before the crash). Autopilot is glorified cruise control: it maintains a distance to the car in front of you, does not avoid obstacles, is not meant to brake unless there are exceptional circumstances, and does not really do anything beyond following the lane. Independent testing by both the NHTSA and the Euro NCAP authority has given Tesla Autopilot the highest safety rating recorded by any car, so it is the safest cruise control among competitors, but it is a cruise control where crashes happen. It's not FSD.
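To make "glorified cruise control" concrete: the core of an ACC-style follower is basically a time-gap rule. A toy sketch (all gains, limits, and inputs are illustrative; this is not Tesla's code):

  # Toy adaptive-cruise-control step: hold a time gap to the lead vehicle.
  # Gains, limits, and inputs are illustrative only, not Tesla's.
  def acc_accel_cmd(gap_m, ego_speed, lead_speed,
                    time_gap_s=2.0, standstill_m=5.0,
                    k_gap=0.3, k_speed=0.8, a_min=-3.5, a_max=2.0):
      desired_gap = standstill_m + time_gap_s * ego_speed
      gap_error = gap_m - desired_gap        # negative means we're too close
      rel_speed = lead_speed - ego_speed     # negative means we're closing in
      accel = k_gap * gap_error + k_speed * rel_speed
      return max(a_min, min(a_max, accel))   # clamp to comfort/braking limits

  # Example: 30 m behind a lead car doing 25 m/s while we do 28 m/s -> brake
  print(acc_accel_cmd(gap_m=30.0, ego_speed=28.0, lead_speed=25.0))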

There has been only 1 case under FSD Beta under investigation by the NHTSA; it occurred in 2021 and is going through the courts. I do expect Tesla to win that case, but let's see. That case involved damage to the car, with no injuries or deaths reported.

Also, I edited the comment to make it smaller; I did not see your reply before editing it. I can revert it if you wish.


I've calmed down after your "heads will roll" comment, so I promise to be civil. To continue: yes, you should put back your original comment to reduce confusion. I'm fine reading a long comment.

> No idea why you had to tout out that you had a PhD when I was pointing out that you were not a user.

You said:

  And then we have bystanders like you with no understanding of what’s actually going on, who want to ban everybody from using Autopilot because you think drunk people are using Autopilot or something.

Which I took to mean you assumed I was a nonpractitioner who didn't know what I was talking about. I know you work at Tesla, so I know your credentials. Since you seemed so eager to dismiss me as a "bystander", my point in telling you about my background wasn't to threaten you with it or to assert an unquestionable authority, but to inform you that I have the necessary experience and education to fully understand all the complexities you think are beyond my comprehension. I am not a "bystander", and while I do not own a Tesla (because of course I don't), that doesn't entitle you to dismiss my point of view as uninformed. You can communicate with me as a peer, not a "bystander".

> (Most are from Stanford, Berkeley or CMU in Tesla AP)

Absolutely, I've gone to school with some of those people. I've been taught by some of those people. I've also taught some of the people you work with. I don't know who you are, but I know you know better, or at least your colleagues do. Which is why this is so especially painful for me.

> I can assure you they think as deeply about safety as you claim you do.

Do they though? Because... again, you're building a product that allows people to drive around town fucking in their car. I notice you didn't address that at all in your reply. What does your team have to say about that, and when will this be banned? Why did your team release "beta" quality hardware and software onto public streets? Why wasn't the public consulted?

The tone of your earlier comment about drunk drivers seems to imply that you think drunk people driving Teslas is beyond the pale. And yet, what do you say about people fucking in their Teslas? That's happening. Isn't that just as dangerous as drunk driving, if not more so?

It's really easy to say you're thinking deeply about these things, but that seems to be as far as the consideration goes when looking at how your product is being used in reality. You didn't think about it enough to realize that the camera sensors on the AP system would be overwhelmed by a bright white obstruction, and that this would cause the AP system to run into it at full speed. You didn't think about it enough to have robust sensing to overcome a single sensor being overwhelmed. Yet you shipped that to the public, and then someone died due to your company's lack of consideration of that failure mode.
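To be concrete about what "robust sensing" means here: even a toy cross-check like the one below (purely illustrative, not anyone's real code) refuses to trust a washed-out camera on its own.

  # Toy redundancy gate: never trust a single sensor's "path is clear" when it
  # reports low confidence or another modality disagrees. Purely illustrative.
  def path_clear(camera_clear, camera_confidence, radar_clear,
                 min_confidence=0.7):
      if camera_confidence < min_confidence:   # e.g. overexposed by a white trailer
          return radar_clear                   # fall back to the other modality
      if camera_clear != radar_clear:          # modalities disagree
          return False                         # degrade safely: assume an obstacle
      return camera_clear

  # Washed-out camera says "clear" with low confidence; radar sees the obstacle:
  print(path_clear(camera_clear=True, camera_confidence=0.2, radar_clear=False))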

To me, it seems like your company's decisions are based purely on maintaining a competitive edge by being a market leader and aggressively pushing unfinished products onto the general and unsuspecting public. Can you please outline the ethical framework you used to arrive at this decision? Please don't tell me it was "We need this as fast as possible to save as many lives in the future, short term casualties are a necessary evil for the greater good."

> I do in fact sleep soundly, knowing I don't call engineers murderers on online forums.

Thanks for confirming my expectations. I figured as much. I figured you had no problem with the fact that your product decapitated someone, and then your company did literally nothing about it for 3 years, leading to it happening again. For the sake of your sleep, I'm glad you're able to rationalize these decapitations as "it was just a glorified cruise control" as if it's the user's fault. Maybe the first time. But the second one is on your company.

But to be clear, I didn't call you personally a murderer; I literally said you're not responsible. But the thing you built is directly responsible. Your team is responsible for releasing these things, which you yourself admit are beta quality, into communities. That's something you chose to do. Engineers must be held accountable when the things they engineer hurt people, otherwise they will engineer things that hurt people.

If you haven't realized it yet, this whole feeling I have really isn't about you but about your company, so don't take what I'm saying personally, unless in fact you do feel your personal work has contributed to these people's deaths.


Again, I did in fact open Pornhub when you pointed it out, and that’s Tesla Autopilot, not FSD. I do find it beyond the pale for people to use FSD drunk, because there are very strict controls on people using FSD. Autopilot is advanced cruise control. Apart from regulatory authorities saying it’s safe compared to competitors, Tesla has released quarterly safety reports since 2019 counting the incidents with Autopilot. It is shown to be safer than humans behind the wheel by quite a margin. Critics will point out that Tesla AP users are older while most accidents are caused by teenagers, that AP is used on freeways while accidents are more common on city streets, etc., but in my view that’s still fine. It shows even the AP users are reasonably responsible.

The real risk comes with FSD, which takes complete control over your car: it could crash, or even worse, crash while the user is trying to prevent one. I don’t think the latter is possible, because the software cedes control to the user instantaneously. There is a risk of users being too careless with FSD, which is why we put quite some effort into getting rid of such users quickly. As I mentioned, there are no deaths with FSD, just one case in 2021 being investigated by the NHTSA.


> Autopilot is advanced cruise control. Apart from regulatory authorities saying it’s safe compared to competitors, Tesla has released quarterly safety reports since 2019 counting the incidents with Autopilot.

So what did you do to fix the issue? Why did a second person die in the same exact way as the first person after 3 years? Were you working on the fix at all? Or did you do nothing? Just admit it if you did nothing in response to that decapitation.

You say "It's Autopilot, not FSD" as if that absolves Tesla of anything. Your marketing does not change my opinion of your technologies. It doesn't change that FSD and AP both have glaring technological flaws, doesn't change the fact that FSD is beta-quality hardware and software being tested on the general public, something the public did not agree to. It doesn't change the fact that even though they didn't agree to it, you unilaterally decided it was okay to conduct a beta test involving us. That's a huge ethical problem, and the fact you don't even see it as such blows a hole in your insistence that your colleagues take safety seriously.

It doesn't matter. I understand why you think it absolves you: because you feel that the technology is similar enough to others out there, that it's just an incremental step, and so how can Tesla be held responsible when people make a career out of fucking in their car using that technology? How can Tesla be held responsible when multiple people lose their heads due to poor choices in sensor design? Correct me if I'm off base. Why is the AP/FSD distinction so important to you?

How do you not realize it's your entire company's fault there was no other orthogonal sensor to see the tractor trailer? How do you not realize it's your entire company's fault people out there feel safe enough to use your product to watch Harry Potter or fuck while flying down the highway, using what you call "glorified cruise control"? That's what you say it is to me, your peer, but to them you've said it was "AutoPilot (TM)". Why didn't you call it "Glorified Cruise Control" or just "Cruise Control"?

That's not on them, that's on you for unleashing this technology on us. People are always going to watch movies and fuck. That they're doing so in your beta-quality robot menace to society is not their fault. Your company specifically conditioned them to think it was okay to do this in a Tesla.

> It is shown to be safer than humans behind the wheel by quite a margin.

The passive voice is doing a lot of work here.

> There is a risk of users being too careless with FSD, which is why we put quite some effort into getting rid of such users quickly.

Again, this just goes to show how Tesla, in fact, is not concerned about safety and security, but is instead laser focused on market dominance and pushing technology on us as fast as possible. Tesla is a leap-before-you-look, shoot-first-and-ask-questions-later kind of company. Or as I said, move fast and break things (or in this case, "put quite some effort" into patching them up after the fact).

This is a brand new technology and Tesla is rushing it out to the public as fast as humanly possible, selling it in beta quality before the technology and software is even ready. And you're telling me now that you screen users for bad behavior and ban them after the fact. It just goes to show you're treating this as some grand social experiment you feel you have the right to run on the rest of us.

Yes, people are going to watch movies in their cars. They're going to fuck. Yes, they will be drunk and asleep. The problem is that you don't seem to care that your customers are using your products to do all these things, and they have been doing so for years.


I feel like I’m repeating myself in the face of your constant emotional attacks trying to make me feel guilty for the 2016/2019 cases, so let me try to end it with this: you’re right about one thing, AP is similar to other tech out there; it is cruise control with some extra features. If you think it is ethically wrong to put AP on public roads, then every car since at least 2005 is a moral hazard. If you believe that, fine; I don’t. Cruise control is regulated, Tesla passed those regulations with the highest ratings, and it has released safety reports since 2019, as I already said. As for why the distinction is so important: most of Tesla AP doesn’t even use new tech, very little deep learning (just some CV stuff). AP still uses radar like every other car manufacturer out there, and AP has a very simple state-space planner, which most cars use now, though most Toyotas use older PID tech.
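If the PID vs. state-space distinction sounds abstract, a toy lane-keeping comparison looks roughly like this (gains and the two-state error model are made up for illustration; none of this is Tesla's or Toyota's actual controller):

  # Toy lane-keeping: classic PID on lateral offset vs. state feedback on the
  # two-state error [lateral offset, heading error]. Gains are illustrative.
  class PIDSteer:
      def __init__(self, kp=0.4, ki=0.02, kd=0.1, dt=0.05):
          self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
          self.integral, self.prev_err = 0.0, 0.0

      def step(self, lateral_err):
          self.integral += lateral_err * self.dt
          deriv = (lateral_err - self.prev_err) / self.dt
          self.prev_err = lateral_err
          # negative feedback: steer against the accumulated error
          return -(self.kp * lateral_err + self.ki * self.integral + self.kd * deriv)

  def state_feedback_steer(lateral_err, heading_err, k=(0.5, 1.2)):
      # u = -K x on the state x = [lateral error, heading error]
      return -(k[0] * lateral_err + k[1] * heading_err)

  pid = PIDSteer()
  print(pid.step(lateral_err=0.3))                    # PID steering command
  print(state_feedback_steer(0.3, heading_err=0.05))  # state-feedback command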

FSD Beta is different; FSD is filled with new DL tech. DL models are black boxes that work surprisingly well but are not interpretable. They can suddenly output something nonsensical (like Bing Chat did; ChatGPT surprisingly hasn’t) and you won’t even know why. There is a risk involved in putting DL-based FSD out there, because you don’t know when it will fail. Tesla took that risk. Tesla, however, has to date had no FSD crashes that involved injuries; it has had 1 crash that involved the front of a Tesla being significantly damaged (which is being investigated by the NHTSA, as I already said), and several smaller collisions that have caused scratches on Tesla cars (at which point we promptly ban that user for life; you can see YouTube videos of this). Uber’s self-driving car killed a pedestrian (though the paid QA driver should have been paying attention; it was not really Uber’s engineers’ fault), while Tesla has actually handled the risk of using DL tech pretty well. It was a real risk, we still have no injuries, and the tech keeps getting better. So yes, your tiresome moral attacks don’t affect me, and I probably won’t respond again if I just have to repeat myself.


> AP is similar to other tech out there, it is cruise control with some extra features

What concerns me is that even if this is technically true (I have no reason to doubt you, so I'll assume it's correct), it is not marketed this way. First, there's the fact that it's called "AutoPilot" rather than something like cruise control, Super Cruise, lane assist, or whatever other car manufacturers call their systems; and second, there are the misleading statements made by Tesla executives about how FSD is "imminent" and will be available soon.


Yea, people are going to hate Tesla because of Musk or because they’re always in the news. I’m not saying that there isn’t valid criticism; it just seems that they’re held to a totally different standard than other companies. People really enjoy taking a moral stance against them, like you’re doing, but it really seems like an overreaction.


As far as I can tell, Tesla was one of the first, if not the first, to start marketing their autonomy as "Auto Pilot" and then subsequently "Full Self Driving (Beta)". Tesla is also unique in trying to implement these technologies using sensors which are not state of the art, for reasons that were both cost-cutting (lasers are too expensive for consumer cars) and naïve (at the time, and still today, vision technology cannot replace a robust suite of sensors).

Nonetheless, they released this beta technology onto the public without consent. It has caused loss of life and property damage, and on top of it all it's also fraud, because it hasn't even delivered on the promise of full self-driving after many years of promises.

I don't think calling them out for this is an overreaction.


> sashank_1509: I find a lot of the hate directed at Autopilot stems from hate directed at Elon

> ModernMech: disgusted at how Tesla in particular and Elon Musk especially are responsible

Nailed it. It seems like ModernMech didn't even read the post he's replying to.



