Other comments addressed the self-driving car part, but I somehow can't get over the part where the author's experience at NORAD is used to explain why we have humans in the loop.
According to Wikipedia [1], US-operated drones have killed 910-2,200 civilians, of which 283-454 were children. This all happened with multiple humans in the loop. Those humans decided occasionally killing children is a perfectly acceptable cost of doing business.
Which is, when you think about it, not too far from how human-driven cars and the car infrastructure operate. We just decided some people are going to die and that's an acceptable cost of business. At least, in the case of cars, the population that benefits from having cars is (more or less) the same population that's killed by them.
But don't tell me it's superior because there's an (incompetent, sometimes sleepy, sometimes drunk) human in the loop. That's just lame "At least there's always someone we can blame!" BS.
Of course it’s superior because humans are in the loop!
Listen to yourself. What is your argument here? Because humans sometimes make choices you find abhorrent, therefore it is better for humans to make machines that behave abhorrently instead? This is like saying killing someone is more moral when you hire a hitman to do it for you.
How is it lame to say there should always be someone to blame? How is that BS? You are making a point that seems to have no logical connection to your premise.
It seems to me that moral decisions, whether made to your personal standard or not, must morally remain in the hands of humans. Otherwise, we are saying to ourselves that morality improves when people turn away from reality, ignore facts, embed themselves in fantasy.
It’s perfectly reasonable to decide that a certain level of danger is acceptable in return for some personal or societal benefit. In a just and reasonable society, we must have mechanisms for revisiting and adjusting that calculus, and we must create mechanisms that incentivize responsible behavior. What frightens me about self-driving cars is that instead of a car accident being a dispute between people, it’s now a dispute among corporations. Accountability is diffused and nullified. And to adjust that system requires glacially slow congressional action.
I don’t mind that premature death can happen. My sister died in a car crash in 1985. One thing that made it tolerable is that we knew how it happened and why it happened. The people involved were people who had put themselves at risk in knowable, reasonable ways. My sister wasn’t wearing a seatbelt. It was the middle of a snow storm. The other driver had misjudged the curve. Head on collision. Tragic, but acceptable.
With self-driving cars, your safety is fully in the thrall of inscrutable and unknowable factors, controlled by people who are not themselves at any risk. THAT should offend you.
Please don't cross into personal attack or swipes, and please don't post in the flamewar style to HN. You can make your substantive points without any of that.
What if requiring meat in the loop necessarily means more human suffering? Maybe you're not offended as long as there's someone to blame, but I'm offended at the unnecessary additional human suffering.
"Cost of business" is an extremely biaised expression. First there is no real "cost" associated with it.
Also its makes it sound like a liability on the compagnies when its a lot more accurate to call "risk that consumers consciously decide to take for the sake of their transportation", which will never be 0 as in any activité / sport in life. And no one wants to ban car (or orther adrénaline sport) because they are so useful or provide a service.
If anything there are bilions poured into car security ..
> US-operated drones have killed 910-2,200 civilians, of which 283-454 were children. This all happened with multiple humans in the loop. Those humans decided occasionally killing children is a perfectly acceptable cost of doing business.
That's correct, but it doesn't refute the point of the article, which is that having humans in the loop to make life or death decisions is better than punting those decisions to a computer. Suppose the US had fielded autonomous drones that required no human intervention to release lethal weapons. Would you prefer that to what actually happened?
(And no, saying that you'd prefer that the US not operate the drones at all is not an answer for this discussion, because the analogy to that would be not allowing driving at all. And that's obviously a non-starter.)
> We just decided some people are going to die and that's an acceptable cost of business.
Again, while this is true, it doesn't refute the point of the article, which is that the cost is less with humans in the loop. Would you prefer that automated Teslas running over pedestrians in crosswalks were the new normal, and we just wrote that off as a "cost of doing business"?
That video is pretty eye-opening for me. If it were my car I would have wanted to stop. But I also think most people are going to very quickly hand the wheel to the computer as soon as they have the opportunity. There's not going to be closely supervised self-driving where a human would intervene in the 3 seconds it takes to reach the intersection; everyone (with the exception of the blog author here) will be nose down, staring at a phone. Solutions will have to come from elsewhere: better signaling between pedestrians and vehicles, stricter regulation on how automated driving must behave in such situations.
Many questions come to mind. Is this lack of consideration for pedestrians specific to Tesla cars, or is it all self-driving cars?
I notice that the US has a car-first culture and infrastructure that is hostile to pedestrians; is this merely a reflection of that culture? The reason it comes to mind: if this car fails to stop for people, how did it get past safety regulations?
It is certainly true for Tesla cars, as they are developed with a total lack of concern for even basic safety protocols, and Tesla has deliberately under-classified their autonomous vehicle systems so that they are not required to prove the system is street-legal or issue any reports to the US regulatory agencies.
Due to their under-classification, there are no safety regulations at all in the US that prevent deployment on the roads. This is in contrast to the EU where FSD is not allowed or available because it can not get past EU safety regulations.
FSD on my Tesla hit a cone once. It was driving through a construction zone that forced us onto the opposite side of the street, and it really wanted to be in the lane that had active construction on it.
Because of this, it had a lot of trouble calculating trajectories and decided the best course of action was to hit a cone dead center.
I emailed the FSD Beta team about this, and they didn't respond.
I still use the system but my trust in it has been eroded quite a bit.
Honestly, I often think about trading the car in for a "normal" electric car that doesn't have any self-driving features so that I'm not tempted to use them. For super long drives, LKAS is enough.
If FSD has already proven itself untrustworthy why on earth would you still use it? You’re lucky it was just a cone you hit. The next time it could be a person.
Why anyone is okay with beta testing these things with live, non-consenting humans around is just baffling.
Because the immediate downside risk is small and the upside benefit is incredible. Even with very low confidence in self-driving vehicle development, e.g. a 1% chance of success in 10 years, the upside is overwhelmingly net positive in terms of lives saved, on top of all the other benefits of automating transport, like freeing almost 10% of human drivers' waking hours in the US. If the risk being taken were anywhere close to the level that would justify scaling back deployment for the sake of safety, we would know.
Because until the legalese says otherwise, I'm still responsible for driving the car, even if FSD is doing the driving. And it is still really good at driving super long stretches of highway for you.
I think we are perfectly capable of producing very safe self-driving cars, on a technological level. We have nearly automated other means of transportation that we consider to be among the safest ones nowadays.
The problem may be that we are trying to solve a lack of hardware using software. Level 4/5 self-driving probably requires specific infrastructure to be installed on all roads, a way to coordinate collision resolution among all cars of all manufacturers (something similar to TCAS in aviation, but operating with a higher number of vehicles involved and on a much shorter time frame), and most likely a ban on non self-driving cars.
Of course, this carries an economic and social cost that is unacceptable nowadays. So we are trying to make self-driving cars work on an infrastructure that was never designed for automated use (and sometimes even makes human drivers confused), by giving a regular car with a few extra sensors to an AI driver and trying to get it to achieve human-like capabilities as an intelligent agent, which is beyond our technological ability at the moment.
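As a very rough illustration of what TCAS-style coordination between cars might look like, here is a toy sketch. Everything in it - the message contents, the thresholds, the "lower ID yields" tie-break - is a made-up assumption for illustration, not any real V2X protocol:

    # Hypothetical sketch of TCAS-style pairwise conflict resolution between
    # self-driving cars. All names, thresholds, and the lower-id-yields rule
    # are illustrative assumptions, not any real standard.
    from dataclasses import dataclass

    HORIZON_S = 5.0         # how far ahead (seconds) each car predicts its path
    STEP_S = 0.1            # prediction time step
    MIN_SEPARATION_M = 2.0  # closest allowed approach between two cars

    @dataclass
    class Vehicle:
        vehicle_id: int
        x: float            # position (m)
        y: float
        vx: float           # velocity (m/s), assumed constant over the horizon
        vy: float

        def predict(self, t):
            """Straight-line position t seconds from now."""
            return self.x + self.vx * t, self.y + self.vy * t

    def conflict(a, b):
        """True if the two predicted paths come within MIN_SEPARATION_M."""
        steps = int(HORIZON_S / STEP_S) + 1
        for i in range(steps):
            t = i * STEP_S
            ax, ay = a.predict(t)
            bx, by = b.predict(t)
            if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < MIN_SEPARATION_M:
                return True
        return False

    def resolve(a, b):
        """Coordinated resolution: as with TCAS, both sides compute the same
        answer, so they never reach contradictory local decisions.
        The lower vehicle_id yields; the other maintains speed."""
        if not conflict(a, b):
            return {a.vehicle_id: "continue", b.vehicle_id: "continue"}
        yielder, other = (a, b) if a.vehicle_id < b.vehicle_id else (b, a)
        return {yielder.vehicle_id: "brake", other.vehicle_id: "continue"}

    # Two cars approaching the same intersection at right angles.
    car1 = Vehicle(1, x=0.0, y=-30.0, vx=0.0, vy=10.0)   # heading north
    car2 = Vehicle(2, x=-30.0, y=0.0, vx=10.0, vy=0.0)   # heading east
    print(resolve(car1, car2))   # {1: 'brake', 2: 'continue'}

The property borrowed from TCAS is that both vehicles run the same deterministic rule on shared data, so one brakes and the other continues; neither side can decide unilaterally and contradict the other. Doing that at road scale, across manufacturers and at sub-second time frames, is exactly the infrastructure problem described above.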
> Never trust a computer to make life-or-death decisions for you
But it will kill statistically fewer people than you will. You are an imprecise bag of mostly water with slow reaction times that gets tired.
The responsible thing is to know when you are not suited for a job and get out of the way, so lives can be saved.
We can argue about whether Tesla is at that point of course and whether they have adequate testing and precautions, but the best way statistically to avoid killing people is to realize you should no longer drive (when the technology has matured to that point and Tesla may not have).
I have killed statistically 0 humans with a motor vehicle of any kind. I am a very precise bag of mostly water that has a vast corpus of knowledge on how to correctly operate a motor vehicle in any weather and traffic conditions, and I can also problem-solve my way out of novel situations. Additionally, I can accurately model the road surface, weather conditions, current traffic pattern, and account for pedestrians without any significant mental or physical exertion on my part. If you or others you know cannot say the same, then yes, they should certainly surrender their licenses and find other modes of transportation. When driving AI has been produced that can provably outperform me in the rain, in ice and snow, and on the track, I'll consider stepping aside. That day hasn't arrived and isn't likely to any time soon.
You may be great. Others -should- turn in their license. But clearly this approach has failed, as a million dead people every year can testify to.
You're right, replacement day has not arrived yet. (Especially not by Tesla.) But it is coming. And I suspect this generation of drivers won't consider stepping aside, much less actually doing so.
But my kids didn't have the same desire to drive that I did. For them it wasn't a big deal; there were alternatives (like Uber). They did learn, but pretty much only because they had to. My grandkids probably won't learn. My great-grandkids certainly won't. No more than I can ride a horse.
I hear you. I'm not great, just a product of my generation. I was expected to be able to operate literally anything with wheels and most things with treads by the time I was 18, that's pretty normal for folks growing up rural in the 80s. I'm not trying to fight the tide here, the point I'm trying to underline (and that we appear to agree on) is that the tech simply is not on a level where replacement of human operators is sanely viable.
One day, Maverick, your kind will be extinct. We don't need drivers who make mistakes, need to rest, make bad judgement calls, have no patience or consideration.
Sure, it's not today, but the day is coming. It's not far away now (on human time lines, not necessarily tech time lines).
That's just it. A career in software engineering has led me to believe that that day may never come; that the industry will, as it so often does, strike upon exactly the right focus-group-tested line of bullshit to drive mass adoption without ever hitting the technical requirements that would make that ethical. If the day does come I'll nap my ass off for a lap around the country on one truly epic road trip. I just hope the driver is smart enough to not park directly under a street light while I'm trying to sleep.
> We don't need drivers who make mistakes, need to rest, make bad judgement calls, have no patience or consideration.
But eliminating human drivers isn't just eliminating that. It's also eliminating humans who are not tired, who are making good judgment calls, and who do have the patience and consideration to drive responsibly. Unless you are claiming there are no such humans anywhere, which is ridiculous (the majority of human drivers meet the requirements the majority of the time).
For self-driving technology to be truly viable, in other words, it's not enough for it to outperform humans who are tired and impaired. It has to outperform humans at their best. Otherwise the humans who aren't tired and impaired will have no good reason to use it, and will have an ironclad objection to being forced to.
> When driving AI has been produced that can provably outperform me in the rain, in ice and snow, and on the track, I'll consider stepping aside. That day hasn't arrived and isn't likely to any time soon.
This is a different argument from the one in the article. You are arguing that you will outperform the AI and therefore should remain the driver. And you want proof to change your mind.
Both are reasonable and I agree.
The author isn't demanding proof that it is statistically better before switching. They object to giving up control and view that as a moral imperative, "no matter how “safe” the computerized features are."
True, which doesn't invalidate the author's point. I'm merely adding a second set of considerations to the discussion. I think it's worth drilling down a little harder on the author's point though. They correctly note that there are no life-or-death systems that are wholly computerized with no human supervision. To me that speaks rather strongly to either a lack of awareness or a lack of ethics on the part of driving AI advocates, as they're cheerleading something the military, aerospace, and medical industries have rejected on merit.
Most industries don't have a million deaths a year as incentive.
For most industries zero deaths is the norm, so clearly computers have to be at least that good. Frankly we're so collectively bad at driving that we'd be an order of magnitude better if the computers only killed 100 000 a year.
In over 100 years of driving humans have, at best, gotten worse. We continue to kill despite cars being better and safer.
On the other hand 100 years of improvement to self-driving cars will (I predict) drop those deaths to under 1000.
You're attributing to operator error what is more likely caused by poor vehicle design, incomplete understanding of crash mechanics, and immature material science and manufacturing techniques. As recently as the late 90s you could still buy a car that would force-feed you the steering column in a head-on collision.
I feel operator error (drunk driving, disregard of road rules, speed limits, signage, deteriorating vision, and so on) is the cause of most road deaths.
Referring to the effects after a collision, and not the cause of the collision itself, is valid, but secondary.
Even the majority of people who are so-called bad drivers have killed zero people, and no one is going to call themselves a bad driver; fewer still are going to admit an AI is better than them (even if it's statistically true).
Do you have a statistic to show this last part? The only stats I have seen are with self-drive in "clean" areas versus a stat from "everywhere". Most self-driving cars don't run in bad areas of Detroit or Bombay but people driven stats count these areas too.
I was speaking somewhat hypothetically, as for now I would agree that it's tenuous at best to call self driving cars safer. My point was more that even if they were provably safer, people would have ego problems in claiming they are more dangerous than self driving cars.
All very true and valid points. I, for one, am willing to put my money where my mouth is. If there comes a time where a general purpose driving AI can consistently outperform my track times and not drop the plot in weird suburban situations I'll gleefully take the back seat and catch a nap while the car drives.
This is classic “I’m special”ism and it’s all an illusion. You are human, you make mistakes, you get tired, you overestimate your capabilities and your reaction times are much much slower than a computer.
There is nothing special about you that makes you more able to drive cars than anyone else, especially compared to a computer. In a few years it will be considered crazy and unsafe to drive your own car, and comments like this will be chuckled at.
> In a few years it will be considered crazy and unsafe to drive your own car
In a few years? Surely you jest. Self-driving tech is nowhere near being able to outperform all human drivers, which is what would be required for your statement to make sense.
> But it will kill statistically fewer people than you will.
A post from another article claims that these vehicles have undergone 3 million miles of testing without causing death or serious injury. The annual loss of life due to human-operated vehicles is estimated to be 50 thousand, across 3 trillion miles of driving. There has not been enough testing done to prove that self-driving cars are any safer in this regard. If you have data suggesting otherwise, please provide it.
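Putting rough numbers on that, using the figures quoted above (the per-mile comparison itself is my own back-of-the-envelope arithmetic, not data from either article):

    # Back-of-the-envelope check using the figures quoted above.
    human_deaths_per_year = 50_000   # estimated annual road deaths, human drivers
    human_miles_per_year = 3e12      # estimated annual miles driven by humans
    av_test_miles = 3e6              # claimed autonomous test miles with zero deaths

    human_rate = human_deaths_per_year / human_miles_per_year  # deaths per mile
    expected_deaths = human_rate * av_test_miles

    print(f"human fatality rate: {human_rate:.1e} deaths per mile")   # ~1.7e-08
    print(f"expected deaths in {av_test_miles:.0e} test miles at that rate: "
          f"{expected_deaths:.2f}")                                   # ~0.05
    # Seeing zero deaths over 3 million miles is exactly what you would expect
    # even if the cars were no safer than humans, so that sample proves nothing.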
> The responsible thing is to know when you are not suited for a job and get out of the way, so lives can be saved.
The introduction to this article suggests that it is people who must make the final decision because machines are fallible. They may have super-human attention spans and be able to make precise calculations regarding a scenario, but they lack judgement. They cannot determine whether something is reasonable, like 300 missiles being launched from the Arctic ocean.
Humans may be imperfect, and they certainly make bad judgement calls, but the reality is that we don't have sufficient evidence to suggest that self-driving cars will be any safer. Until we have enough evidence to prove their safety, there should be a human making that final call.
> It will kill statistically fewer people than you will.
Speak for yourself.
Individual human beings are not statistics. If you have evidence that someone is an unreliable driver, sure, take steps to deal with that. But don't tell people with an unblemished safety record at driving that "statistics" say they are unreliable.
I object to the 1.3 million people that are killed by human drivers every year. Human drivers should be banned and only autonomous vehicles allowed on the road, without steering wheels.
Why not ban individually owned and operated cars, and create a focus on pedestrian infrastructure and trains, then? That technology already exists, and it's healthier and cheaper!
Does the technology exist to have trains get close enough to everywhere that you can get by with only trains and feet? Honest question: can you get by with only trains even in Japan?
I think the hybrid approach here might work well. Allow self-driving cars, but limit their speed to one at which they are very clearly safe, maybe something like 10 to 20 mph currently. This would cause most people, most of the time, to use mass transit, and if the last mile (or the first, getting to the train station) of their trip required it, they could opt for it.
Even for people who have a perfect, unblemished safety record at driving? Which, btw, is the majority of drivers on the road?
Basically, you're saying you're fine with punishing the responsible people who take driving seriously and do it safely, because of the mistakes of a few irresponsible people who don't. That's backasswards. You should be focusing on the irresponsible people and leave the responsible ones alone.
Ok, there are concerns about self-driving cars, and clearly the tech is not quite there yet, but I fear the premise at the heart of this article is poor.
Firstly, the tech of the 70s and 80s which he's referring to is clearly irrelevant when talking about the state of the art today.
Secondly, the penalty for failure for incorrectly launching nuclear missiles is, well, somewhat severe. You are pretty much ending the world.
Granted, driverless cars will kill people. I think one has to start there. I don't think the standard can be, or should be, zero deaths. [1] The question is whether they'll kill fewer people than human drivers (currently around a million a year).
Self driving cars will get better. Each improvement will persist forever. Ultimately deaths will become newsworthy. Which is to be hoped for, because right now those million deaths a year are ignored.
[1] Every day, (unpredictable) people are involved in an endless game of chicken - with other drivers, with pedestrians, cycles and so on. Most people err on the chicken side most of the time.
Self driving cars are (and will become more) predictable. It's easy to play chicken with a computer that doesn't care about you - you yield early and quickly. Being predictable makes the game much healthier - for the other player.
that's an awfully bold claim with regard to a field that seems to be now-and-forever software-dominated.
eventually the hardware will stabilize into a 'good enough' equilibrium like any commodity electronic device -- i'm not as comfortable saying that about the software given market/corporate incentive structures as they exist today and the (so-far) inherent 'always-connected' automotive platforms.
"Self-driving" cars is not necessarily new technology in the sense of people shooting down new ideas. It's been shoved down everyone's throats for over a decade now. It's not coming any time soon, despite the unending promises and claims, and companies have gotten away with pushing regulations and skirting the law to perform R&D and testing on public roads that has killed several people. I think it's pretty reasonable to be tired of this. Same goes for VR, crypto, "AI", and every other over-hyped and over-promised technology over the past decade.
In particular to self-driving cars, it is a distraction from other technological development we could be doing, it continues to hold up the car as a source for peoples' affection and keeps cars relevant, and the need for self-driving cars has never been articulated in a way that makes any sense.
Uh, it literally isn't here. It'll be here when you can let a Tesla off its leash in every form of adverse driving conditions known to man and it consistently outperforms a human driver, including in novel situations. What's being tested is the minimum viable approximation that legislators are willing to tolerate on our roadways.
Tesla is not the front-runner; it hardly even qualifies as being in the race.
Waymo and Cruise are currently testing systems with no driver that do at least a passable job in most standard city driving conditions for (self-reported) hundreds to thousands of hours between traffic incidents. Humans average a few 10K-100K hours between traffic incidents so it is at least within the general ballpark. It remains to be seen whether they can improve adequately and robustly to be better than human drivers.
It is literally here in the sense that there are cars with no drivers that can operate for on the order of years (self-reported) without a critical failure that are currently in testing. It is not here yet in that human drivers are amazing and we do not know if AVs are better than human drivers as there is insufficient, unbiased evidence and the more extensive testing with safety drivers indicate they are still a factor of 10x to 100x away.
Excuse you? How is demanding self-driving cars demonstrably improve on the performance of their human counterparts shifting goalposts? What, we're all supposed to just sit with our hands in our lap and accept sharing the road with a mediocre implementation of what humans can already do?
> in every form of adverse driving conditions known to man
I don't need, for example, a self driving car to be able to handle snow particularly well, because it does not snow where I live. I don't need it to be able to drive well off-road because there are roads everywhere I go. I don't need it to be able to tow a trailer or a caravan, or drive a heavy vehicle. Self driving technology does not need to surpass humans in every single driving condition in order to be useful to a large number of people.
And if self-driving cars are cordoned off to special areas, they are just another extension of buses, subways, trains, but with less manual operation needed. The criterion that self-driving vehicles must perform very well everywhere is extreme when we already have so many modes of transportation designed for specific contexts.
Then call them and market them as something other than "car". Because if you call it a car and market it as such folks will treat it like one, which means exposure to every type of road and weather condition known to man with potentially lethal consequences if the software can't keep up.
Nothing even tangentially related to the software industry, no. There's some exciting stuff in the biotech pipeline, but software has been a dumpster fire of grift, hype, bullshit claims, and labor issues for at least a couple of decades now.
You're right. Software peaked in 2003. There's been no useful things added since then.
Personally I never use Google Maps to find my way, I never use Uber to commute, I always book hotels and flights through a travel agent, all my computers run XP, I watch all TV from plastic DVDs, I listen to music on more plastic discs, I go into the bank or write checks to do any transactions.
Would you care to review the social and geopolitical impacts of social media, how walled gardens have destroyed the original promise of the internet, the financial and labor impacts of Amazon's near-monopoly on online sales, crypto's blistering track record at separating dumb from money at the modest cost of the energy consumption of a medium sized industrial nation, VR & AR's notable lack of life/world/job/industry altering implementations, vendor lock-in, the industry's holy war perpetual motion machine on thick vs thin clients, a few hundred waves of clients having their investment cash set on fire by dev shops platforming on tech only truly suitable for a FAANG because "it's the great new thing", FOSS's implosion into a free labor pool for private companies, or the successive waves of failure of software .orgs claiming to improve everything from healthcare workflows to the functioning of municipal governments by adding software? No? You can take your strawman and go.
>> software has been a dumpster fire of grift, hype, bullshit claims, and labor issues for at least a couple of decades now
I absolutely agree there's a bunch of crap. I argue that there's a bunch of really excellent good things in there too. (I'll add rocket landings to the list because they are so insanely cool.)
With regard to crypto; it's just the latest in histories long list of get-rich-quick schemes. I think we can ignore that. (Human greed is human greed).
I think most of your list boils down to a mix of shitty humans, the Free market in action, and things playing out to their natural conclusion in an extreme capitalistic society. A lot of the things (not all) are very localised to the US.
I'd like to hear an example of really excellent good things. Privatization of space exploration doesn't make the list, for a diversity of reasons, not the least of which being that Kessler syndrome is a thing and handing the keys to orbit to private industry has -very- predictable outcomes.
You act as though that's everything in the world of technology. And even then, technology is not everything. There's plenty of things going on that have no need for technology, much less over-hyped and over-promised technologies.
Upvoting can mean you support the discussion, not just that you support one view or another.
To your point though, I think two shifts in attitude are in play. One, technology has a higher bar to reach when it comes to safety. We accept that to err is human and expect that a human driver will have remorse and shame and may even be punished. With a self-driving car these things don't exist. Two, we have a lower tolerance to death and injury. Imagine cars didn't exist (but for some reason roads did) and you invented the car with all of today's safety standards. You would never get it approved.
Upvoting promotes discussion which is good. Upvote interesting topics, whether you agree with them or not. It's not a vote on the point of the article.
Same for comments: voting is not agreeing, it's saying that a point is well made and adds to the discussion.
As to HN's aversion to new technologies - that's a mix of an aging demographic (fear of making my job obsolete), a reflexive aversion to hype over substance (a thing which nerds have traditionally been against because, well, we are bad at hype and view people good at it as undermining our substantive work), and in some cases the fact that there's objectively not much value in the tech even if it succeeds.
It's the way of the world. People whine about new stuff all the time, from their hand-held hi-tech computer, over the connect-everything network, while binging their favorite sitcom from the 90s. Because, you know, those techs are from the olden times.
All tech should be subject to multiple angles of scrutiny. That's a good thing. Otherwise HN just risks becoming an echo chamber, like many subreddits about specific things.
A car autonomously drove across the US in 1996. Self-driving cars aren't a new technology. Check out Drive: The Race to Create the Autonomous Car by Alex Davies for some good background.
Hmm. I watched the video a couple times. Didn't understand the issue at first, second watch I saw it.
This isn't proof, it's the author selectively choosing a particular video that supports their point of view. Maybe the video actually does demonstrate a problem with the self driving software, but without more supporting data it's simply cherry picking.
While I support a person in the middle kind of system, I don't think the author supports his case well. It's an opinion piece, basically.
So, the argument is: "I worked on old tech that didn't work without humans in the loop, therefore no technology will ever work without humans in the loop"? I don't follow.
Incorrect. There is plenty to discuss yet about self-driving tech's current state of play and how that compares to a trained human operator's abilities in identical situations. There's also the evergreen topic of liability in the case of accidents that has yet to be resolved to my satisfaction.
This isn't very well reasoned. In the depicted situation the car's and pedestrian's paths don't intersect if the car simply keeps driving and the person keeps walking. If the car slams on the brakes, it could hurt the driver by stopping abruptly, or it could slow down just enough to turn a non-intersecting path into a crash without coming to a stop in time, depending on conditions. E.g. the car needs 3 seconds to pass a given position and the person can only cover 60% of the distance in that time, but slowing makes it take 5, and now the pedestrian ends up under the wheel or on the ground.
As an aside, he's an obnoxious religious fundamentalist who believes the current state of the country is caused not by concrete dysfunction from one side but by a lack of Jesus. This is the third individual of questionable character on these pages today, all from you, and none of them have much interesting to say in the blog posts you have shared. Bad people sometimes have ideas worth sharing, but you aren't exactly batting a thousand here. Please find more interesting things.
For every complicated question there is an answer that is simple and wrong.
The person becomes visible as separating from the crosswalk in the video 1 second in, and intersection with the crosswalk happens about 3 seconds in. A reaction time of 1 second gives you 1 second to bring the car to a complete stop.
Pedestrians playing frogger by stepping out directly in front of cars that can't reasonably react in time are often deemed at fault. In one particularly groan-inducing case, a pedestrian was successfully sued for the damage caused by the collision.
Beyond legal liability slamming on the brakes in this situation is just logically the wrong thing to do. A good driver would note that they are going to pass the pedestrian without incident. A bad driver slamming on the brakes drastically increases the chance of an accident in an attempt to slavishly follow a poorly understood interpretation of the rules at the expense of actual safety.
If you slam on the brakes at one second you will come to rest 3 feet from the pedestrian who will predictably freak out.
If you slam on the brakes at 1.5 seconds you probably hit them.
If you slam on the brakes at 2 seconds you will probably come to rest in front of them pointlessly.
If the pavement is wet or your car is even slightly shitty, you will almost certainly hit them.
In any case you risk injuring yourself and risk a collision with another car, which you will be responsible for.
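For a rough sense of the margins involved, here is a quick stopping-time check under assumed numbers - roughly 25 mph and textbook braking decelerations; neither the actual speed nor the road surface is known from the video:

    # Rough stopping-time/distance check for the braking scenarios above.
    # Speed and deceleration values are assumptions, not measurements from the video.
    def time_and_distance_to_stop(speed_mps, decel_mps2):
        """Time and distance to come to rest under constant braking."""
        return speed_mps / decel_mps2, speed_mps ** 2 / (2 * decel_mps2)

    speed = 11.2       # ~25 mph, in m/s
    dry_decel = 7.0    # hard braking on dry pavement (m/s^2), textbook-ish value
    wet_decel = 4.0    # hard braking on wet pavement (m/s^2)

    for label, decel in [("dry", dry_decel), ("wet", wet_decel)]:
        t, d = time_and_distance_to_stop(speed, decel)
        print(f"{label}: ~{t:.1f} s and ~{d:.1f} m to stop")
    # dry: ~1.6 s and ~9.0 m to stop
    # wet: ~2.8 s and ~15.7 m to stop

With only about 2 seconds between the pedestrian becoming visible and the car reaching the crosswalk, a brake applied late or on a wet surface leaves essentially no margin, which is what the scenarios above are getting at.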
An excellent point. Are you proposing the vehicle in question performed all of that analysis and came to the conclusion that rolling through was the right move or did it merely ignore the crosswalk? Because I can't call it based on that picture and there's been some fairly alarming incidents involving people fucking around with FSD recently that lend credence to a more negative interpretation.
[1] https://en.wikipedia.org/wiki/Civilian_casualties_from_U.S._...