When I hear "skyrocket" I think "several orders of magnitude in a very short time".
The article actually has quantitative data that shows reports of "incidents where driverless cars disrupt traffic, transit and emergency responders" rising by well over an order of magnitude in a year.
Yeah, the city officials have to cover their asses and make disclaimers about what they can and cannot conclude, because all they know is that they're getting more reports of incidents; they don't have access to Waymo's data. But given that we know driverless car activity has increased substantially, it seems silly to assume that it must just be a random coincidence that people are now reporting correspondingly more problems.
The city assigned a guy to go around and document every time an AV does something. That's why it's "skyrocketing". They didn't count them before and now they do.
If some SFFD guy just stood around documenting all driver stupidity it would be a significantly different report.
> The city assigned a guy to go around and document every time an AV does something
Want to provide a citation to back that up?
> They didn't count them before and now they do.
They've apparently been counting since spring of 2022; the large increase in incidents started a year later.
> If some SFFD guy just stood around documenting all driver stupidity it would be a significantly different report.
Lovely dose of whataboutism. We know that drivers do stupid things, and more or less what the consequences are. It's incredibly useful -- and critical -- to find out what stupid things AVs do, and in what ways those stupid things differ from what human drivers do. The example of the AV driving through yellow caution tape, hooking a Muni wire, and then continuing to drive another block before stopping is illustrative. We can maybe imagine an unlikely-but-possible scenario where a human driver might do the same thing, but that would be an outlier.
That also raises another point: as an example, we know that some drivers text while driving. Not all drivers do this; hopefully it's a minority. But a bad behavior that one AV does, all of them (running the same software) will do. That's a much worse problem than bad behaviors that a minority of drivers exhibit. (On the other hand, though, if fixing it in an AV is straightforward, you eliminate the problem... that sort of thing doesn't work with human drivers.)
> The example of the AV driving through yellow caution tape, hooking a Muni wire, and then continuing to drive another block before stopping is illustrative. We can maybe imagine an unlikely-but-possible scenario where a human driver might do the same thing, but that would be an outlier.
Are you joking? Do you live in SF? People are constantly driving into subway tunnels.
People are just inured to the omnipresent stupidity of drivers. Imagine how blinded by ideology you would need to be to write down that Waymo is bad because it stopped in a driving lane on three different occasions. As if DoorDash wasn't a global enterprise dedicated to double-parking intentionally!
> People are just inured to the omnipresent stupidity of drivers.
Tell me at least a little about the ideology that starts with omnipresent human stupidity and improves safety by adding a new class of proprietary, intractable-decision-makers to that same system.
You are imputing to me a philosophy I do not hold. I don't think AVs should be added to MeatVs. I think as soon as AVs reach a practicable level, human drivers of private cars should be banned in cities.
If you want to live in a prison, fine. Don't try to impose it on the rest of us. And don't imagine it's for some greater good. The world you envision is vile.
Send this article to someone trying to argue that cyclists never follow the law and should have licenses and be prepared to roll your eyes at the excuses they come up with...
People don't even consider it a crime when cars speed 10 km/h over the limit, roll stop signs, or turn right on red at 10 km/h, but if a cyclist rolls a stop sign at 5 km/h it's goddamn anarchy that needed a solution yesterday.
> That also raises another point: as an example, we know that some drivers text while driving. Not all drivers do this; hopefully it's a minority.
I would be surprised if it was a minority. I am an extremely active pedestrian and cyclist in Toronto, and if you pay attention to people in their cars stopped at red lights, you'll find a majority glancing down at something. It could be a phone, the radio, or something on the dash panel, but when you start noticing how many people react to green lights by hearing the car in front of them go rather than by sight (because their heads are down), it doesn't make me optimistic that they're looking at anything other than their phones.
Sure, you shouldn’t be distracted while driving, but distractions occur, and some of them may require attention, e.g. driving directions that don’t make sense, or music that's too loud. The best time to deal with that kind of distraction is at a stop light, because you have a couple seconds of acceptable reaction time, unlike just about any other time while driving. I’m not surprised or concerned about distractions at stop lights; I have seen very little evidence of accidents caused by that kind of distraction. I’m concerned about distractions while moving (especially at speed), and the two are not necessarily correlated.
> I have seen very little evidence of accidents caused by that kind of distraction.
One of the very few actual collisions in SF involving a Waymo was a rear-end accident where the guy behind the Waymo was staring at his phone at a stop; when the cars in the next lane started moving, he started moving without knowing what was happening. Which is quite common. Looking down at your phone or at the car's console while stopped at an intersection is a bad idea. If you have to do it, it's important to look around for a moment before you begin moving again.
Based on personal experience: yes, if an AV is susceptible to a problem, then all of them are susceptible, at least on the same software and hardware configuration. This is the same in every safety-critical field.
The comment is a bit naive though.
What is your statistical model for such failures? Let's go with the example and assume that, from now henceforth forevermore, all AVs will run through caution tape (which is obviously not true).
What is the frequency of caution tape being drawn across the road in front of an AV vs. the frequency of JUST drivers driving drunk, or JUST speed violations, or JUST texting violations, or JUST distracted driving, etc.?
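To make the comparison concrete, here's a toy expected-incidents model. Every number in it is invented for illustration (none come from the article or from any real dataset); the point is just that a correlated fleet-wide failure mode has to be weighed against the aggregate rate of human failure modes, not compared one-for-one:

```python
# Toy model: expected incidents from a fleet-wide AV bug vs. independent
# human failure modes. ALL rates and mileages below are hypothetical.

av_miles = 1_000_000        # hypothetical annual AV fleet miles
human_miles = 100_000_000   # hypothetical annual human-driven miles

# Hypothetical per-mile probability of encountering caution tape, which
# (in this toy model) every AV mishandles identically.
caution_tape_rate = 1e-5

# Hypothetical per-mile incident rates for human failure modes, each
# exhibited by only some fraction of drivers.
texting_rate = 2e-5
drunk_rate = 5e-6

av_incidents = av_miles * caution_tape_rate
human_incidents = human_miles * (texting_rate + drunk_rate)

print(f"expected AV caution-tape incidents: {av_incidents:.0f}")
print(f"expected human-driver incidents:    {human_incidents:.0f}")
```

Under these made-up numbers the human side dominates on sheer volume, but the AV side scales linearly with fleet miles, which is exactly why the relative rates matter.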
Though, we definitely ought to hold companies' feet to the fire when they are liable for incidents like this. Which we aren't really doing.
> we definitely ought to hold companies' feet to the fire
To clarify my position: even though I oppose SFMTA's reactionary stance on this topic, I do think Cruise should be sent to the penalty box. Most of these incidents, and all of the serious ones like driving through the caution tape, were Cruise incidents. I think they should produce a public post-mortem report on that and demonstrate, both in simulation and in practice on closed courses, that their system doesn't do that any more. And I am sure Waymo already incorporates Cruise's failure scenarios into their simulations. Call me a Waymo partisan, but I don't think Cruise is up to Waymo standards.
The Waymo "incidents" are that it stopped somewhere it should not have, which I view as much less serious.
The reality is that in order to make the technology safe, you need to be able to expose it to the conditions under which it will operate. To me, this is not the issue. The issue is whether companies have the requisite technology in place to prevent a minor problem from becoming a major problem.
The optics suggest that most of them are scaling their fleets too quickly and assuming their systems can handle more general problems than they actually can. I believe the companies know this, and, in essence, nobody is seriously capable of stopping them because there is not enough interest in doing so. Part of that is because the companies are so secretive about their technologies and capabilities.
Voluntary reports are not the answer to this; they will always find a way to fudge them. The only answer that will work without stifling innovation through uninformed laws is to simply hold them to a high standard and give them the stiffest penalties the law allows for each infraction. I think this will have the natural effect of forcing them to keep smaller fleets they can better control, or at least to use safety drivers (which would also reduce fleet size).
IANAL, so I don't know how to make this actually have teeth since I think traditionally the driver bears the liability.
I don't think that chart represents anything informative. It's at least partly based on "social media reports", and they say it's "incomplete". Any number of alternative explanations for that chart (which isn't a "skyrocket") fit the results better, such as increased awareness of the cars, increased numbers of miles driven (so the complaint rate per mile is roughly constant), and negative press coverage of incidents.
> so the complaint rate per mile is roughly constant
Sure, if the complaint rate per mile is constant, but the number of driverless cars increases exponentially, then yeah we might expect the number of complaints to increase exponentially. That doesn't mean this isn't a problem.
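A quick sketch of that point, with entirely hypothetical numbers (starting mileage, per-mile complaint rate, and growth rate are all made up): holding the per-mile complaint rate constant while fleet miles grow exponentially produces exponentially growing complaint counts.

```python
# Constant complaints-per-mile x exponentially growing fleet miles
# => exponentially growing monthly complaint counts.
# All numbers below are hypothetical.

rate_per_mile = 1e-4   # constant complaint rate per mile (made up)
miles = 30_000         # hypothetical starting monthly fleet miles
growth = 1.3           # hypothetical 30% month-over-month mileage growth

monthly_complaints = []
for month in range(13):               # 12 months of growth
    monthly_complaints.append(miles * rate_per_mile)
    miles *= growth

print(monthly_complaints[0], monthly_complaints[-1])
```

So an exponential-looking complaint chart is consistent with a constant per-mile rate; it doesn't by itself tell you whether the cars are getting worse, only that there are more of them causing the same kind of trouble.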
> they say it's "incomplete"
Okay, so maybe there are more problems than are represented in the chart, but that doesn't seem to paint any prettier of a picture here.
> The companies point out that, in a city that sees dozens of traffic deaths caused by human-driven cars each year, their driverless taxis have never killed or seriously injured anyone in the millions of miles they’ve traveled.
So, by your logic, the public policy we should be pursuing is reducing human-driven car miles to a minimum.
If you ignore the rate at which the events are occurring and don't bother to collect any data on the relative rates of other things, like standard taxis and delivery vehicles, then your "data" is worthless from a public policy perspective.
If self-driving cars are really safer than human drivers, then the tech companies need to prove it by releasing more data than necessary, rather than less. Why won't they release sufficient data instead of forcing the city to gather its own? This doesn't smell right...
That seems odd. Surely the authority that licenses this sort of activity should have access to all the data that allows proper evaluation of its safety?
Access to such data should be a prerequisite for the agreement.
It’s such a new area legally that there aren’t precedents for this. Consider dietary supplements as an example of a product that doesn’t have to provide their own safety data.
If you actually look at the article, it has the numbers in absolute terms. In April '22 (the earliest month on the chart) there were 3 reports. From there, the counts follow something like an exponential curve, reaching nearly 100 per month a year later.
We can split hairs all day about what constitutes an order of magnitude or what percentage increases mean or whatever, but this doesn't appear to just be statistical noise.
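For what it's worth, 3 reports/month rising to roughly 100/month over 12 months implies a compound growth rate of about 34% per month (a back-of-the-envelope sketch; the "nearly 100" figure is an eyeballed reading of the chart, not an exact count):

```python
# Implied month-over-month growth factor for 3 -> ~100 reports in 12 months.
start, end, months = 3, 100, 12
factor = (end / start) ** (1 / months)
print(f"implied monthly growth factor: {factor:.2f}")  # ~1.34, i.e. ~34%/month
```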