Hacker News | adamjcook's comments

There will be significant differences, though, between the demands end users make of a web browser and those they make of a viable CAD system for serious work.

It is an extremely tough market that many have been shaken out of.

Reliability is a must, particularly on the data representation and exchange front, and those assurances carry ENORMOUS costs. Big money is at immediate risk in downstream, physical product if that goes south at any time - risk spread across multiple manufacturers and other product lifecycle parties.

Lots of workstation compatibility certification work is done on CAD kernels.

PLM alone, embedded in many engineering organizations, is a very sticky element for retaining a certain CAD program and certain kernels internally - and it is my understanding that PLM is where the major money is made (not so much on the per seat CAD licensing costs).

New grads are coming out of college today having used software supplied by the CAD companies in much of their classwork for four years.

Many CNC controllers use Parasolid internally for certain visualization and programming operations - machines that will be working on the floor for decades from today.

The fact is that the per seat licensing cost and lock-in tradeoff is simply not a serious issue for many - and such costs have become arguably marginal for even small design houses and manufacturers.


In my view, the higher-level issues with the FSD Beta program are:

- A failure by Tesla to view the system that they are developing as what it really is - a physical safety-critical system and not "an AI". Those two are distinct because, with a physical safety-critical system, the totality of the systems safety components cannot be fully expressed in software - neither initially nor continuously.

- To build on that point, Tesla is not allowing the Operational Design Domain (ODD), via a robust, well-maintained validation process, to determine the vehicle hardware as the ODD demands. Instead, Tesla is trying to "front run" it (ignore the demands of the ODD) by largely focusing on hardware costs. The tension from failing to recognize that is, in part, why Tesla has a long history of being forced to (somewhat clandestinely) change the relevant sensor and compute hardware on their vehicles while promising to "solve FSD" (whatever that means) by the end of every year since around 2015 or so.


> and not "an AI"

But what is AI? If it's just "artificial intelligence", it effectively includes all programming with if/then logic gates based on program input.


> it effectively includes all programming with if/then logic gates based on program input.

And? You think that is the totality of a physical safety critical system?


I'm pointing out that calling anything "AI" is both pointless and meaningless. It's a buzzword for board members and shareholders to throw around - they use it to refer to the latest LLM technology, while the phrase just means any complex business logic implemented in a program.


It’s generally accepted to mean the use of neural networks, which Tesla is obviously using. Good luck even identifying a stop sign with “complex business logic” or “if/else”.


Most important road signs have rather distinct shapes, standardized sizes and are angled towards oncoming traffic. Having an object with known shape aligned almost perfectly towards the camera is basically the best case for many primitive object detection algorithms.
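
For what it is worth, here is a rough sketch of the kind of primitive, non-neural detector being alluded to: threshold the red paint in HSV, then look for a roughly octagonal contour. This is purely illustrative Python/OpenCV with made-up thresholds, not anything any automaker actually ships:

    import cv2

    def find_stop_sign_candidates(bgr_image):
        # Red wraps around hue 0 in HSV, so threshold two bands and combine them.
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        mask = (cv2.inRange(hsv, (0, 90, 60), (10, 255, 255))
                | cv2.inRange(hsv, (170, 90, 60), (180, 255, 255)))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        candidates = []
        for c in contours:
            if cv2.contourArea(c) < 500:   # ignore small specks of red
                continue
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            # A head-on stop sign appears as a roughly convex octagon.
            if len(approx) == 8 and cv2.isContourConvex(approx):
                candidates.append(cv2.boundingRect(approx))
        return candidates

Something like this falls apart quickly with occlusion, faded paint or odd viewing angles - which is exactly what the replies below get at.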


True, but it’s equally important that a self-driving car be able to recognize a stop sign that is bent from a previous accident and facing an arbitrary angle (as well as one that is angled towards the car’s lane but applies to a different road).


And stop signs that have been altered in some way. For example, rural stop signs that are peppered with holes from pot shots must still be recognized. Snowy stop signs with the bottom half obscured by accumulated drift. Signs with a non-red sticker reading “WAR” placed below the word “STOP”.

And that’s not even getting into cases where you conditionally act like there’s a stop sign. The city of Houghton, MI has major streets along the side of a hill, and minor streets going up and down the hill. Every winter, sand is put down for traction, and every spring it is cleaned away. If there’s a late-season snowstorm after the spring cleaning, cars going downhill on the minor streets physically cannot stop, so everybody on the major streets looks uphill before crossing.

Short of location-dependent fine-tuned models, I’m not sure how machine learning could replicate the logic of “if snowy in late spring, grant right-of-way to cars headed downhill”.
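
To make it concrete: once a perception stack has handed you the scene, the rule itself is a few lines of hand-written logic - the hard part is knowing the rule exists in the first place. A toy sketch, with every name invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Scene:
        # Hypothetical outputs of perception plus map/weather context.
        month: int
        snowing: bool
        on_major_cross_street: bool
        downhill_traffic_approaching: bool

    def yield_to_downhill_traffic(scene: Scene) -> bool:
        # Hand-written local rule: in a late-season snowstorm, cars coming
        # down the minor streets may be physically unable to stop, so traffic
        # on the major street yields even though it nominally has right-of-way.
        late_spring = scene.month in (3, 4, 5)
        return (late_spring and scene.snowing
                and scene.on_major_cross_street
                and scene.downhill_traffic_approaching)

How a learned, general-purpose driving policy acquires rules like this, town by town, without someone writing them down is the open question.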


They're "artificial neural networks" and it would seem to me it's recognizing stop signs by comparing them to images of stop signs. So I tend to lean toward "AI" is the latest "buzzword". I think in truth it's more akin to a search engine reacting to inputs, but from sensor data, than anything close to real "intelligence" of any kind.

I can see how it appears to be intelligent, but it lacks reasoning, creativity, and critical thinking.


If I had a fully functional self-driving car, I wouldn't be lamenting that it wasn't creative enough


“Creative” doesn’t necessarily mean “generating new behavior”, but can also mean “generating new hypotheses”. Suppose you see a group of young kids playing in a yard. One tosses a ball up into the air, and the rest run towards it. The first to reach the ball throws it back into the air, and the rest run toward it again.

It requires creativity to recognize the rules of the game as “try to be the first to reach the ball”, to recognize that the thrower may not have time to carefully aim, and that the others might chase the ball regardless of its location. Only if all three of those creative leaps are made can logical deduction take over to conclude “if the ball goes in front of me, stop before a kid does the same”.


That "comparison YouTube video" is absurd and dangerous, because, at minimum...

A Level 4-capable vehicle (a Waymo vehicle) is an incomparably different system than a Level 2-capable vehicle (a vehicle equipped with FSD Beta).

The Waymo vehicle has a design intent such that there is no human driver fallback requirement within their vehicle's Operational Design Domain (ODD).

The Tesla vehicle has a de facto design intent such that the human driver is the fallback at all times - which makes the control relationship between the human driver and the automated system exactly the same as if the Tesla vehicle was equipped with no automation at all.

The risk profiles and failure mode analyses are Night and Day different and, therefore, the validation traits between these two vehicles are Night and Day different.

But, more than that, there are no guarantees that:

- The human driver of the FSD Beta-active vehicle shown in that video did not manipulate any of the vehicle controls out-of-view that clandestinely assisted the vehicle without deactivating the automated system (possible and inherent Human Factors safety issues with that aside); and

- The creators of this comparison video did not select the most visually-performant run out of several attempts.

Naturally, since we are dealing with safety-critical systems here, assumptions of "positive safety" are not compatible with any internal or external analysis.

Lastly, I have yet to see a video involving FSD Beta where indirect and "unseen" systems safety issues were satisfied. Appearances can be deceiving and deadly with safety-critical systems.


>The human driver of the FSD Beta-active vehicle shown in that video did not manipulate any of the vehicle controls out-of-view that clandestinely assisted the vehicle without deactivating the automated system (possible and inherent Human Factors safety issues with that aside); and

That's why I included Marques Brownlee's demo.


Respectfully, no FSD Beta video can add anything of safety value in evaluating these systems - and the only thing that these videos do these days is add a sense of complacency in most or all FSD Beta users.

Videos and personal experiences can only reveal safety issues, never positive progress.

Marques (and every other FSD Beta user) is not read into a would-be systems safety lifecycle for this safety-critical system that Tesla should be maintaining.

Marques (and every other FSD Beta user) is entirely blind to that.

It is a complete Black Box to them.

Therefore, the assessments made are always subjective and are almost entirely based on emotions and appearances (and other hand-wavy, ill-defined aspects such as "interventions" or "disengagements") rather than a complete accounting of all relevant systems safety components.

Systems safety is about exhaustively asking questions and then exhaustively seeking quantifiable answers to those questions against established failure modes and in the context of the system and every other system that interacts with it (including the human driver in the case of a FSD Beta-equipped vehicle as a Level 2-capable vehicle).

That is the whole point of a company maintaining a robust systems safety lifecycle - to convert subjective opinions of system characteristics into quantifiable understanding.

Tesla is not maintaining that.

Throughout the video, there are several places where Marques states "he thinks" or "he believes" or "that looks good", and such comments are also prevalent in the YouTube comments attached to the video.

These are safety-critical systems where an unhandled failure can readily result in an injury or death.

Responsible systems developers need something far more quantifiable than blind opinions of run-of-the-mill consumers.

That FSD Beta-active vehicles do not appear to "run into things as often" on the roadway is not a complete evaluation of the system.

There are also very real indirect and "unseen" safety components that are inherently part of the public roadways that must also be accounted for.

For what it is worth, I touched on some examples of this recently in a Mastodon thread: https://elk.zone/mastodon.social/@adamjcook/1101629508444173...


> It is a complete Black Box to them.

If you think it's a black box to the Tesla drivers, how is it not a black box to the Waymo customers in the back seat of these cars? Or how are you evaluating Tesla vs Waymo, if not by how humans subjectively feel each system is performing?

If you mean to the teams, you cannot assume that Waymo's systems are any less of a black box than Tesla's systems. And even then, they're not much of a black box at all, besides the actual object detection: both Waymo and Tesla still have most decision-making in regular logic-based code, not machine learning algorithms, and when they do use ML, such as for "do I need to get over now to make the next turn", the output is still fed back into the "business logic" that decides what to do, and is thus logged and audited when it's sent back to HQ.
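
Roughly the pattern being described - and to be clear, this is a generic sketch with invented names, not either company's actual architecture - is ML output feeding a deterministic decision layer whose inputs and outputs get logged for later audit:

    import json, time

    def plan_lane_change(ml_suggestion, vehicle_state, log_file):
        # The model only suggests ("get over now to make the next turn");
        # rule-based code makes the final call.
        decision = ("change_lane"
                    if ml_suggestion["confidence"] > 0.8
                    and vehicle_state["adjacent_lane_clear"]
                    else "hold_lane")
        # Log everything that went into the decision so it can be audited
        # when telemetry is sent back.
        log_file.write(json.dumps({"t": time.time(),
                                   "suggestion": ml_suggestion,
                                   "state": vehicle_state,
                                   "decision": decision}) + "\n")
        return decision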


Waymo customers in the back seats of cars are not testers or operators; they are cargo. These are fundamentally different roles with fundamentally different requirements with respect to the safety lifecycle.


So are we just as much in the dark about how much progress Waymo could be making, given that "Videos and personal experiences can only reveal safety issues, never positive progress" is the argument, and Waymo doesn't exactly give us access to their BigQuery to perform our own qualitative analysis?


Yes. As a member of the general populace you have no idea how much progress Waymo is making and are unqualified to "test" their systems. However, you are not being asked to "test" their systems and you are not involved in the operation of the systems, so the point is moot. This is in contrast to Tesla, where you "are" both of those things, which is the problem.

Also, I just realized that the systems safety engineer you responded to has also posted a reply, so you should look to their statement for a more in-depth analysis as they are an expert on the subject.


I think it’s unfair to say that you’re not testing Waymo’s systems when they allow people to get in them to take trips. And while it will try to pull over on the side of the road if it has a problem, it can also stop in the middle of the road if it doesn’t think it can pull over, which can be a safety problem even in 35/45 mph zones.


> If you think it's a black box to the Tesla drivers, how is it not a black box to the Waymo customers in the back seat of these cars?

The general public (as a vehicle occupant) only interacts with a Waymo vehicle as a passenger with no vehicle control responsibilities.

That is in stark contrast to the integral human-machine relationship that exists in a Tesla vehicle.

> If you mean to the teams, you cannot assume that Waymo's systems are any less of a black box than Tesla's systems.

True.

Waymo's internal processes are a Black Box to me (and anyone external to Waymo) because we are not read into their systems safety lifecycle, whatever it may be.

Hopefully and presumably, Waymo is maintaining a Safety Management System (SMS) with their test operators and other internal teams, as they have claimed in the past.

Of course, since there is little-to-no regulatory oversight of this in the US (at the moment, perhaps)... Waymo's "word" is really the only thing the public has to go by.

That is not acceptable, in my view, in constructing a novel transportation system that ultimately relies on public trust to be economically viable... but that is the regulatory reality right now.

In the case of Tesla, it is definitive that they are not maintaining a SMS, in large part, because Tesla's (untrained) customers utilizing the system cannot be sufficiently read into a lifecycle. There is simply no way to do that without maintaining a highly-controlled, continuous relationship with the test operator.

For example, the "release notes" (sprinkled with some Tweets from Musk) that Tesla issues with some of the FSD Beta updates are simply too puny relative to the complexity of not only the vehicle system, but the larger complexity of the roadway.

> And even then, they're not much of a black box at all, besides the actual object detection, as both Waymo and Tesla still have most decision-making in regular logic-based code, not machine learning algorithms; and when they do, such as with "do I need to get over now to make the next turn", it's still fed back into the "business logic" that decides what to do and thus logged and audited when it's sent back to HQ.

As I stated elsewhere, these are physical safety-critical systems where the totality of the systems safety components cannot be expressed in software alone.

Remote vehicle telemetry is valuable of course, but as a tool to serve the validation process... not the validation process itself.

Vehicle telemetry cannot be a complete accounting of all of the interacting systems safety components involved here.

For that, like all other safety-critical systems, one needs exhaustive, controlled and physical validation.


That’s a lot of words, but at the end of the day the NHTSA allows FSD beta on the roads. Someday in the distant future Tesla will likely use the data they’ve collected to make statistical inferences to regulators about the safety of the system as a whole. Design intent doesn’t matter now, and won’t matter in the future when the system is retroactively validated for level 5.


Yah, I'm amazed how humongous piles of paper analyses are considered safer than evidence that stuff works in the real world. I get that statistics are hard. It took 30 years to bust p-value hacking and much longer to get Bayesian inference widely accepted. Some of us would love to _understand_ how neural net based automation makes its decisions. How do you deal with the observable fact that these robots do, in the real world, make the right decisions more often than humans - yet we can't explain why, because we didn't construct the robots' controls, at least not in the traditional way?


Speculation. Lynching was also allowed at one point, and then it wasn't


Did you really just compare Tesla’s driver assist to lynching?


Doesn't matter - something that used to be common, used to kill people, and eventually society decided it shouldn't be allowed. Asbestos might be a better comparison.

Point is, you are assuming a certain legal outcome, and there is no reason to believe it won't go the opposite way


In my book, Tesla gets a bad rap for providing an unvalidated, should-be safety-critical system to run-of-the-mill consumers without an accompanying Safety Management System.

The fact that they profit handsomely off this structurally dangerous wrongdoing is just the cherry on top.

And, without robustly maintaining a systems safety lifecycle (which, by necessity, must incorporate a Safety Management System)... no technical progress is quantifiable by anyone, including Tesla.

Tesla effectively throws a system over-the-wall and throws it all on the human driver and on the public.


People keep using their Necessary Capital Words to say what we have is balderdash. Another post guffaws that there isn't an Operational Design Domain.

I agree that where we are is balderdash, and dishonest, unclear about itself in the extreme, and lacking. But I detest this My Paradigm Is Required phrasing. Say what you think, please! Browbeating the topic with particular/specific engineering dogmatisms is unhelpful and unclear: it leans on authority, while also not having assertable claims anyone else could contest. This kind of hollow criticism degrades.


Nothing you said is wrong, I just really, truly don’t care. FSD makes fewer mistakes on routes I drive than it did a year ago, so whatever they’re doing seems to be working. Complaining that people like me shouldn’t be allowed to purchase safety critical software just deepens my resolve to keep using it.


> Nothing you said is wrong, I just really, truly don’t care. ... Complaining that people like me shouldn’t be allowed to purchase safety critical software just deepens my resolve to keep using it.

You do understand that people are concerned about setting these cars free on public roads, right, where they can kill unwilling participants? I don't think the concern here is about your freedom to choke on billionaire boots.


Regulators (NHTSA in the US specifically, but other countries as well) continue to allow it.

Elon is a bit of a monster (personal opinion), but regulators have the final say. When they force FSD to be pulled, then there is weight behind the argument, but this hasn’t happened and that sends a signal.

You already share the road with inattentive drivers and drunks, so the risk acceptance/appetite benchmark has been set. FSD is arguably better than both cohorts, considering number of deaths caused.

As always, nuance. More people will die in the next ~10 minutes from traffic deaths than have ever been attributed to Tesla’s autopilot or FSD (33 total, as of this comment).

https://www.tesladeaths.com/

(Again, not a billionaire simp, just a rationalist; booo on Elon, but props to Tesla engineers in the aggregate; personally, I hope he gets blown out the door and JB Straubel takes over as CEO)


> You already share the road with inattentive drivers and drunks...

The second is a crime and I believe the first is a misdemeanor. Getting caught in either scenario repeatedly will cause you to lose your license.

So we may share the road with dangerous drivers, but we don't accept it. So it isn't grounds to accept more danger.

Really this line of argument is always wrong. The presence of danger should make you less comfortable accepting additional danger, not more. It's not like they cancel out, they sum. (One might say that mature and accessible self driving technology would take these drivers off the road, but the situation today is immature technology and high end vehicles.)

As a self-described rationalist, I think you should take another look at that. To me, it reads like you're saying that because it doesn't feel like we're taking on additional marginal risk compared to the risks we've already taken on, we don't need to worry about how we're actually doing so - which is why I was caught a bit off guard when you said you were a rationalist.


Also, is it essentially the same "driver" in all those Teslas? If one driver was responsible for 33 deaths over a couple of years, they'd have lost their license long ago!


> FSD is arguably better than both cohorts, considering number of deaths caused.

I'm skeptical of any digital technology use case where the analogies/comparisons tend to be:

a) to non-digital things, when there are perfectly decent digital comparisons to be made (e.g., autopilot in the airline industry)

b) to conspicuous non-digital things, here drunk or inattentive humans

c) more or less a $small-human-scale-X improvement on the performance/efficiency/safety of the non-digital thing, often implying a future $unspecified-X improvement that, say, Moore's law would suggest (not saying you're doing the latter here, btw)

Those last two especially. I'm just imagining someone hawking a newfangled realtime audio system, claiming that it outperforms hand-punching a player-piano score. It's a silly example for sure; but on a regular basis on HN I read how Bitcoin is no worse than fiat in terms of global energy use, how crypto scam rates are roughly equivalent to wildcat banks in the old west (and hey, we eventually improved on those, so...), and how many more humans cause car deaths than these intractable systems which are "arguably" better than drunks.


> Regulators continue to allow it, and their opinion > internet randos.

This is true.

> When they force FSD to be pulled, then there is weight behind the argument.

Well, I suppose that it is pretty hard to dispute this, but it should be recognized that the NHTSA (the theoretical regulator of vehicle and highway safety in the US) is extremely weak and virtually non-existent.

The NHTSA lacks anything close to the skill sets necessary to independently, proactively and robustly scrutinize even rudimentary mechanical issues (which has been confirmed by several USDOT OIG reports over the years).

With opaque, complex automated systems and software... the NHTSA stands no chance.

The NHTSA lacks the internal skill sets to understand any of the comments that I have made elsewhere on this post.

Again, you are not wrong per se, but again, it should be recognized that the NHTSA is concerned primarily with establishing plausible deniability to protect the agency and with headlines rather than protecting the public with solid regulatory processes and oversight.

(Coincidentally enough, yet another USDOT OIG report was buried in a Friday afternoon release: https://www.autoblog.com/2023/06/02/nhtsa-fails-to-meet-inte.... I kid you not, every four years or so the USDOT OIG releases another critical report on the NHTSA that focused on issues not rectified in the previous report. It is like Groundhog Day.)

> You already share the road with inattentive drivers and drunks, so the risk acceptance benchmark has been set.

This is true.

Because the US public does not demand change and because roadway deaths are high, but distributed across time and space... the NHTSA remains weak and overall US transportation policy remains dreadfully poor.

> FSD is arguably better than both cohorts, considering number of deaths caused.

Unquantifiable.

There is no way to accurately and independently quantify the downstream safety impact of FSD Beta.

Sure, perhaps the NHTSA believes that (because they must given their structural issues), but we should recognize why such assumptions are flawed.

> More people will die in the next few minutes from traffic deaths than have ever been attributed to Tesla’s autopilot or FSD (33 total, as of this comment).

There is the possibility for "indirect" incidents caused by FSD Beta where the FSD Beta-active vehicle is never physically impacted.

We cannot assume that those do not exist.

And we also cannot assume that the media is able to pick up on every Tesla vehicle-related incident - even as well-followed as Tesla, the company, is.

In fact, other than the automaker's word, in many cases, safety investigators like those from the NTSB cannot independently and forensically establish specific root causes.


Yet they allow human beings on public roads killing unwilling participants. There seems to be a two-tier system at play - unaided humans killing thousands upon thousands to the extent it’s normal vs humans using a tool incorrectly and killing … a handful?

My experience with FSD is it’s a terrible autonomous system and anyone who uses it as such is a fool, and a fool with a car is dangerous no matter what. However, the combined capability of my driving awareness and skill and the car’s is greater than mine alone. I’ve had it suddenly brake when a car I didn’t notice was drifting into my lane, and had it not, I would have been in an accident. Likewise it made mistakes and I took control.

I personally don’t care if it ever is able to take me from point A to point B without my attention or assistance. I value its ability to navigate with my assistance especially on long trips, reducing my overall fatigue and taking me through confusing sections of urban interstate without errors - when I always make a wrong turn. The fact it’s 360 aware and I’m not and it’s indefatigable and I’m not is valuable.

In the last year it’s become remarkably more capable. I don’t know if they can continue this rate of improvement, but if they can, it’s about as good as I would expect from today’s technology on a consumer car. That’s a decent bar for me. I think it’s also something valuable on the roads - as a driver assistance tool. The folks who turn it on and get in the backseat would do something just as bone-headed without FSD. Rather, I notice an enormous number of Tesla cars on the road not being driven by total idiots, and presumably quite a few using FSD without issue. And, as I assert above, I believe the joint probability of the aware driver with FSD having an accident is lower than either alone.

I don’t care what hyperbole a bipolar nut job spouts, but I do appreciate him setting an unreasonable goal and failing halfway there while the rest of the world seems content with stagnating. Tesla created the EV movement in the mainstream, SpaceX created the space revival we are experiencing.

Fwiw, I think the choking on billionaire boots comment is not a particularly high value contribution.


    I do appreciate him setting an unreasonable goal and failing halfway there while the rest of the world seems content with stagnating.
This is underrated. I write this as someone who is half appalled and half amazed by Musk. How much Tesla has already achieved in their self-driving efforts is incredible. Leaving aside Musk's vision, it also means they must have an absolutely stellar engineering team working on this problem. Creating and maintaining this team is a huge feat by itself.


> right, where they can kill unwilling participants?

I feel the same way about teenagers, grandmas, and grandpas, and yet here we are.


Which is why many countries have an 18-year age limit for driving, plus in my country there are people pushing for mandatory regular driving tests for old people.


If FSD is statistically safer than the 18 year old (or new driver of any age), is it ethical to knowingly cause more death by forcing the new driver to drive instead of allowing them to use FSD?


You seem to forget that Grandpas have rights, for example to live their life, and being able to get from A to B is part of that.

FSD does not have rights.


Once grandpa's vision, reaction time, etc. declines to the point he becomes unsafe he should be losing his license anyways. That's the law today, though it is not enforced as strictly as it should be. In my model grandpa maintains the ability to travel by using FSD and everyone is safer, in your model he loses his license and is stuck at home or more people die.


Those concerns are entirely unwarranted, bordering on hysteria. FSD has an adequate real world safety record.


FSD has no safety record, as Tesla does not release their datasets for analysis by third parties, research institutes, or government regulators. In fact, they have deliberately misclassified FSD as not being an autonomous vehicle system, despite repeatedly indicating that it is intended to be and currently is an autonomous vehicle, to avoid mandatory California DMV reporting requirements [1]. The only "safety record" available is published by Tesla themselves, with no access to the underlying data - and Tesla is the entity with the greatest financial conflict of interest. You might as well ask Ford how safe the Pinto was or VW how clean their diesel was. There is literally no reason to believe those safety reports; in fact, you should probably anti-believe them, as this is the standard pattern of manufacturers of unsafe/inadequate systems.

[1] https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...


If Tesla had an adequate safety record, they would back their claims by offering liability insurance when self driving is enabled.

The fact that Tesla does not put their money where their mouth is says all you need to know.

Wake me up when someone is willing to take on liability. Until then automated driving does not exist.


FSD isn’t to the point where it can safely operate without a driver monitoring it. You’re arguing against allowing it on the roads with a safety driver on the basis that FSD isn’t ready to be operated without a safety driver. That makes zero sense.

FSD with a safety driver does not pose a threat to public safety. That’s not coming from me, it’s coming from the NHTSA.


What does FSD stand for?

I would have thought that something called Full Self Driving would be capable of fully driving by itself.

Sarcasm aside, that’s how most people use it.

The NHTSA hasn’t regulated it, but that isn’t an endorsement. It just means that they haven’t found anything within their legislative mandate that they can regulate here. US agencies are largely reactive, and I’m not sure I trust them with my life to get this right.


Most people use FSD as a driver assist because the software will permanently ban you from the beta program if you don’t keep your hands on the wheel.

No one who has used the software for more than a minute has any misconceptions about FSD.


> No one who has used the software for more than a minute has any misconceptions about FSD.

But people who have heard the name might. Honestly, it's a ridiculous name. Just "Self-driving" might be arguable. What are they going to call it when it's really fully self driving? "Literally Fully Self Driving Totally Serious"


I have no problem allowing it on the road. I do have a problem calling something that does not drive itself “self driving”. Much less FULL self driving.

Although, I guess my bigger issue is actually with the government that takes no action about false advertising.


"Full" can mean a lot of things. You chose to interpret it as L5, but to me it clearly compares itself to AP and other lesser ADAS systems which has a much smaller ODD. FSD has a nearly full ODD (it doesn't do parking and reversing yet). I understand you might get the impression that Musk means L5, and clearly that is the long term goal, but that doesn't mean FSD has to be L5. Anyway, arguing about a name is pointless. There are literally thousands of videos online where someone who's about to purchase a product for $15,000 can see exactly what they will get. The "fear" that grown adults will purchase something for $15,000 purely based on a vague name without reading any disclaimers or read/watch a single review is disingenuous and purely false concern.


"Adequate" is an interesting choice of a word when describing something's safety record.


Adequate to many is “Kills fewer people than the average human driver.”

And yet, the average driver kills 0 people. It’s in the far margins where deaths occur - a fraction of a fraction of a percent.

This is in comparison to the arguably singular entity “Tesla Autopilot” which has already killed several.


Tesla autopilot replaces a lot more than one human driver, so this is a mathematically nonsensical argument.
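
Spelled out: you have to compare rates over comparable exposure (deaths per mile or per hour driven), not a raw per-driver count against a fleet-wide count. With invented numbers purely to show the shape of the comparison:

    def deaths_per_billion_miles(deaths, miles):
        return deaths / miles * 1e9

    # US human-driven baseline, roughly the right order of magnitude:
    print(deaths_per_billion_miles(40_000, 3_000_000_000_000))    # ~13

    # For the fleet, the attributed deaths are public claims, but the miles
    # driven with the system engaged are not independently known, so the
    # rate is undefined without Tesla's data:
    for hypothetical_miles in (100e6, 1e9, 10e9):
        print(deaths_per_billion_miles(33, hypothetical_miles))   # 330, 33, 3.3

The answer swings by two orders of magnitude depending on a denominator nobody outside Tesla can verify.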


So does a bus driver. Or a pilot. Or an engineer.

And we hold them responsible for the humans in their care.


We don't. 42K people are killed in the US every year in automobile accidents. Yet humans are allowed to keep driving and killing people despite their massive flaws.


Individual humans have their license revoked and sometimes even face manslaughter charges if they cause too much death on the road. FSD is like one mind in control of an entire fleet. Its mistakes are amplified relative to a lone human's.


The problem is, you don't hear the counterargument from the folks for whom it didn't work, because they're dead.


Pretty sure you do - any indication that even regular auto-steer+ACC is engaged during a fatal incident is cause to sound the alarm bells and put out a headline for "Tesla Autopilot killed this person!". The first fatal accident involving FSD Beta is going to be a multi-week long charade of media attention and pressure on regulators to exclaim "despite this being 5x safer, you must gate this until it's literally infallible".



> > The first fatal accident involving FSD Beta

> first?

According to your own citation, there are no deaths where FSD Beta has been alleged to be involved in any way.


So a Tesla driving off a cliff is OK because it wasn't using the "FSD Beta" or whatever that idiot is marketing as the latest buzz?

Shouldn't the public already be outraged by the existing deaths? What's the difference... There will not be any change with an FSD death, just as there wasn't with the previous deaths. Most people just don't care.


Is it okay, you ask. Yes! Yes, it is okay to be factually accurate. It's not okay to be factually inaccurate in order to reject evidence which contradicts your position.

> So a Tesla driving off a cliff

This is a lie. There is no evidence that any Tesla vehicle has ever driven itself off a cliff.

Perhaps you are thinking about media reports from a few months ago where a human drove their vehicle off a cliff? If so, it is concerning that you misremembered and/or misrepresented this news event in this way. This news event was of a human driving a vehicle off a cliff. That vehicle happened to be a Tesla — noteworthy only because experts were saying that the chassis did an astoundingly good job of protecting its occupants.

No use of driving aids was ever alleged.

> Shouldn't the public be already outraging from the existing deaths?

Yes. A million people are killed by vehicles driven by humans every year. A million people dead. Every year. This is an outrage.

As for deaths where use of Tesla Autopilot is indicated or alleged, I find them no more outrageous than deaths involving any manufacturer's brand name for lane centering/adaptive cruise control. These features do not lower the driver's responsibility to keep the vehicle under control. And they do not diminish the accountability of the driver in the event of a collision.


Isn't that an empty criterion? Doesn't the system disengage before an accident by design?


Tesla says “we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed” https://www.tesla.com/VehicleSafetyReport
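
One plausible reading of that methodology, with field names invented for illustration:

    def counted_in_autopilot_safety_report(crash) -> bool:
        # Per the quoted methodology: attribute the crash to Autopilot if the
        # system was active at impact or had been deactivated within the 5
        # seconds before impact; count crashes where an airbag or other
        # active restraint deployed.
        attributed = (crash.autopilot_active_at_impact
                      or (crash.seconds_since_deactivation is not None
                          and crash.seconds_since_deactivation <= 5))
        severe = crash.airbag_or_restraint_deployed
        return attributed and severe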

But even without that, there are no allegations of such FSD Beta-related fatal or incapacitating incidents - though who would say so, even if the beta put the human in harm’s way 10 or 15 seconds earlier? The black box in the car includes both the dash cam footage and whether auto steer is activated, so any fatal crash would still leave such evidence of the Beta being active.


Every day thousands upon thousands of new teenagers start driving. One day, maybe today, the self driving technology will be safer than those teenagers and it will be a moral crime to put them in control of the vehicle. The same applies for the elderly.


> According to Moravy, vehicle assembly processes haven’t changed in the last 100 years, which he says is “really silly.

Where to start with this article?

Here is a good spot I guess.

First off, Moravy is wrong.

If anyone thinks that auto manufacturing and the "vehicle assembly process" has not substantially changed in the last 100 years, then they are totally ignorant of the industry and of the exacting details associated with something as complex as automotive manufacturing.

The other vital thing that this article fails to mention at all is how manufacturing is shaped by the larger concerns of the product lifecycle - which is (or should be) the actual "product" that leaves the factory.

"The car" is just a hunk of metal that embodies the product lifecycle - which can be competitively unique from manufacturer-to-manufacturer.

One cannot talk myopically about "costs" and whatever happens on the manufacturing floor without bringing in the total concerns of the product lifecycle (i.e. service, end-of-life, market requirements).

That is difficult to do in an article because the total size and complexity of each automaker's product lifecycle is immense (and largely unknown externally from the automaker in question), but it must be done.

> If something goes wrong in final assembly, you block the whole line and you end up with buffering in between.”

Which is how, fundamentally or in part fundamentally, the Toyota Production System works - and it is difficult to argue with the quality results at Toyota.

Honestly, I am not seeing much of a difference here overall.

There are various component assembly lines that do run outside and "in parallel" with the General Assembly lines at incumbent automakers.

I am not even sure how this is debatable.

> “However, there are some quality-related risks involved, such as potential gaps in fit and finish,” warns Pischalnikov. (snip) “The reason that’s always been done is for color consistency, to ensure that there’s a perfect match between the doors and the rest of the car body,” Prasad points out. “By not having to assemble, disassemble and reassemble vehicles, you can reduce production costs and eliminate waste.

Which are quality control aspects that Tesla still seemingly struggles with, near as I can tell.

I am all for encouraging automakers to explore new methods of automotive manufacturing and BEV production will present significant opportunities to do so, but this article from Assembly Magazine is, at the very least, incomplete.


100% agree with you, having been in and around automotive factories for years. Tesla is really capitalizing on two things here:

1) Tesla customers seem extraordinarily willing to overlook production defects and servicing/repair issues compared to customers of other OEMs. This lets them get away with lower manufacturing quality tolerances than they normally would, as noted in the article mentioning the water ingress issues. Seems like they're going to further capitalize on this customer tolerance with the paint process changes.

2) As a result of only building BEVs with no legacy support requirements, Tesla is able to design the manufacturing process and vehicles themselves to be more efficient to assemble. It's a definite competitive advantage today and that's another thing some of their intended changes here will try to capitalize on. The question to me is how long it will be until the traditional OEMs catch up here.


> As a result of only building BEVs with no legacy support requirements, Tesla is able to design the manufacturing process and vehicles themselves to be more efficient to assemble.

While automakers reuse factories, they have no qualms about building new ones and closing old. I have family that works on assembly lines and every couple years they get an offer to move to New Mexico, Arizona, Tennessee, Georgia or wherever the new factory is being built.

What is the legacy albatross that hangs around ICE manufacturer necks?


A combination of supply chains, unions, dealers (they hate BEVs), support requirements for existing vehicles, cannibalization of existing vehicle lines, and (at the moment) cost of capital.


> The other vital thing that this article fails to mention at all is how manufacturing is shaped by the larger concerns of the product lifecycle - which is (or should be) the actual "product" that leaves the factory.

> "The car" is just a hunk of metal that embodies the product lifecycle - which can be competitively unique from manufacturer-to-manufacturer.

The thing is, Tesla doesn't have a concept of the product lifecycle once the car rolls off the line. They don't care about tuners and tinkerers, they don't care about aftermarket sales (e.g. people realizing they might want a trailer hitch), they don't care about people ending up in accidents (or why else would a body shop wait months for spare parts), they don't care about maintenance (because let's be real, unless you get a lemon car, all you'll need to do for 10-15 years is brake and tire changes!) and no one forces them to do so either, so they do what makes the most profit for them: easy assembly trumps everything, and not having much of a dealer/service station network means you don't have to invest money into building it and schmoozing up dealers' arses for incentives.

Their entire structure is fundamentally different from conventional car makers. Add on top what the Chinese are doing, and the conventionals are headed for some really dark times.


The sad thing is, I ran into actual mechanical engineers who failed to see these issues with Tesla's approach and hype, showing a shocking lack of knowledge about mass manufacturing. So the Tesla hype is working, even if it is mostly unfounded in reality.


This article is a learning example of what submarine advertising and marketing look like. This is how you write a "legitimate article" that spreads disinformation and praises a corporation.


I am tired of Tesla lying about its vehicles being capable of "driving themselves" and selling a "Full Self-Driving" product that is anything but - preying on the public's ignorance and substantially harming public safety.

I am dog tired of that.

No, other automakers are definitely not saints... but Tesla has embraced uniquely extreme wrongdoings throughout its history and this Handelsblatt story (which is still coming out in stages) fits Tesla like a glove.


Not to challenge your comment, but we have been converting office high-rises into apartments all over Detroit seemingly quite successfully and tastefully - with buildings built in the 1920s no less.

I am living in a converted office high-rise (built in 1914) right now in fact: https://en.m.wikipedia.org/wiki/Kales_Building

And there is another in the middle of conversion right next door: https://en.m.wikipedia.org/wiki/United_Artists_Theatre_Build...

It is hard to generalize, but I would be willing to bet that it is cheaper to convert more modern, recently built or still under-construction office high-rises.


> It is hard to generalize, but I would be willing to bet that it is cheaper to convert more modern, recently built or still under-construction office high-rises.

No, it's harder.

As anyone who has visited a doctor's office or somesuch in an older office building knows, they tend to be already divided up into smaller, discrete spaces off hallways, like apartments, and often have individual plumbing into each space. Modern office buildings with big, open floors aren't like that.


In some cases, yes, but certainly not in all of them (or a majority of them) here in Detroit (based on photographs taken by urban explorers).

There are many department stores and theaters, for example, that had wide open floors.

Conversions of department stores to residential units is popular in Detroit since we had so many at one time.

The United Artists Theatre high-rise I mentioned has no divided offices from the photos taken by urban explorers: http://www.detroiturbex.com/content/parksandrec/uat/index.ht...

I believe that the Kales Building did not as well.

It is certainly not clear to me, in many of these Detroit conversions, that just because the space was divided into discrete, smaller spaces, plumbing was run to them.


> The OTA updates do provide an avenue to make cars much safer by reducing the friction for these type of safety fixes.

True, but let us also acknowledge the immense systems safety downsides of OTA updates given the lack of effective automotive regulation in the US (and to varying degrees globally).

OTA updates can also be utilized to hide safety-critical system defects that did exist on a fleet for a time.

Also, the availability of OTA update machinery might cause internal validation processes to be watered down (for cost and time-to-market reasons) because there is an understanding that defects can always be fixed relatively seamlessly after the vehicle has been delivered.

These are serious issues and are entirely flying under the radar.

And this is why US automotive regulators need to start robustly scrutinizing internal processes at automakers, instead of arbitrary endpoints.

The US automotive regulatory system largely revolves around an "Honor Code" with automakers - and that is clearly problematic when dealing with opaque, "software-defined" vehicles that leave no physical evidence of a prior defect that may have caused death or injury in some impacted vehicles before an OTA update was pushed to the fleet.

EDIT: Fixed some minor spelling/word selection errors.


This is a totally fair response since I didn't say that directly in my comment, but I 100% agree. OTA updates are a valuable safety tool. They also have a chance to be abused. We can rein them in through regulation without getting rid of them entirely because they do have the potential to save a lot of lives.


I agree.


It'd probably be just as effective to require that every version of the car software that is made available to the fleet must also be provided to the NHTSA. There's no sweeping shoddy versions under the carpet then.


It is the terminology that exists in US automotive regulations (what little there effectively are).

A "recall" is just a public record that a safety-related defect existed, the products impacted and what the manufacturer performed in terms of a corrective action.

Additionally, I believe that the possibility exists that Tesla must update the vehicle software at a service center due to configuration issues. Only a small number of vehicles may require that type of corrective action, but the possibility exists.

Historically, there exist product recalls (especially outside of the automotive domain) where the product in question does not have to be returned (replacement parts are shipped to the impacted customers, for example).


No, really, this is true. It has nothing to do with how the defect is fixed.

https://www.nhtsa.gov/sites/nhtsa.gov/files/documents/14218-...

https://www.law.cornell.edu/cfr/text/49/573.6

(Except for tires.)


Hmm. Perhaps I should have read the parent's comment more carefully. I think that I might have misinterpreted it.

You (and the parent comment) are correct.

My comment was not intended to argue that a recall prescribed a particular corrective action.


No consumer can purchase any vehicle that is capable of "self-driving" or "driving itself" today.

Tesla's vehicles are not capable of self-driving and, at all times, the human driver is driving the vehicle as both Autopilot and FSD Beta are partial automated driving systems that require a human driver fallback at all times.

The attentiveness required of the human driver with a partial automated driving system is equivalent (on a systems-level) to if the vehicle was not equipped with any automated driving system at all.

