A cynical person might remark that your being a Tesla employee has some bearing on your position, but I am not such a person.
> In essence, this is not tech that is being used by regular people who have a chance of misusing it.
Now this is true in one sense – people who can't afford a Tesla, and who aren't willing to spend an additional $15k on a piece of software that the company's CEO has (many, many, many) times described in terms so optimistic they bear little relation to material reality, cannot use the software to drive a car – and very false in another: _what if someone else's Tesla crashes into me?_
> while Tesla can keep iterating step by step
I could be wrong (this is a genuine statement, please don't take it as a passive-aggressive one; it's not intended that way), but doesn't this rely on Tesla first finding a failure, then diagnosing the symptom, writing a fix, and so on? The trouble is that this initial failure might be one of the several crashes that have occurred in a Tesla on Autopilot, which isn't great.
PS: I have left Tesla, but sure, I might be biased since I have friends there and worked there for a while.
> I could be wrong (this is a genuine statement, please don't take it as a passive-aggressive one; it's not intended that way), but doesn't this rely on Tesla first finding a failure, then diagnosing the symptom, writing a fix, and so on? The trouble is that this initial failure might be one of the several crashes that have occurred in a Tesla on Autopilot, which isn't great.
Failures are generally user disengagements, not crashes. We measure user disengagements, classify them, and try to drive the egregious ones to zero. FSD has had one major crash (no injuries) that is being investigated by NHTSA, and a few minor bumps (I went into more detail below).
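To make that workflow concrete, here is a rough sketch of the kind of disengagement triage I'm describing – the event fields, cause labels, and severity buckets are purely hypothetical illustrations, not Tesla's actual telemetry or tooling:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical disengagement record; the real telemetry schema is not public.
@dataclass
class Disengagement:
    clip_id: str
    cause: str      # e.g. "missed_lane_line", "phantom_braking" (made-up labels)
    severity: str   # "benign", "uncomfortable", or "egregious"

def triage(events: list[Disengagement]) -> Counter:
    """Count egregious disengagements by cause, so the worst categories
    can be prioritized and driven toward zero release over release."""
    return Counter(e.cause for e in events if e.severity == "egregious")

if __name__ == "__main__":
    events = [
        Disengagement("clip_001", "missed_lane_line", "egregious"),
        Disengagement("clip_002", "phantom_braking", "uncomfortable"),
        Disengagement("clip_003", "missed_lane_line", "egregious"),
    ]
    for cause, count in triage(events).most_common():
        print(f"{cause}: {count} egregious disengagements")
```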
> _what if someone else's Tesla crashes into me?_
I think that is a very fair point. It happened when an Uber self-driving car crashed and killed a pedestrian, which was a major incident in this industry. The problem with DL models is that they are unexplainable and we cannot tell when they will fail (though in the Uber case it was not exactly the DL model failing). Tesla took this risk and has managed fine, with no injuries to date. And now the main reason I made this post: the tech keeps getting better. We have this model from Meta that literally segments everything in an image (even ones you take from your phone). It honestly feels like we are leaving the risky DL territory and reaching the "we can't understand how, but it just works" territory, where you can rely on a deep learning model to do what you expect it to do.
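(The Meta model I mean is the Segment Anything Model, SAM. If you want to poke at it yourself, a minimal sketch with their open-source segment-anything package looks roughly like this – the checkpoint file name is the ViT-H one from their repo, and the image path is just a placeholder:)

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load the ViT-H SAM checkpoint downloaded from Meta's segment-anything repo.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# Any photo works, including one taken on a phone; SAM expects an RGB array.
image = cv2.cvtColor(cv2.imread("phone_photo.jpg"), cv2.COLOR_BGR2RGB)

# Each result is a dict with a binary segmentation mask, area, bounding box,
# and a predicted quality score.
masks = mask_generator.generate(image)
print(f"Found {len(masks)} segments")
```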