I think a key disconnect is that the 'incidents' people complain about most are not collisions, but they can still be disruptive, and indeed dangerous.
Suppose one were to just install brightly painted immobile bollards on streets, and insist they were "driving" just very very slowly. They wouldn't hit anyone. They wouldn't kill anyone. They would piss everyone off.
This disconnect is repeatedly part of where this conversation gets tripped up.
> “Cruise’s safety record is publicly reported and includes having driven millions of miles in an extremely complex urban environment with zero life-threatening injuries or fatalities,” Cruise spokesperson Hannah Lindow told The Chronicle.
> The city’s transportation agencies documented several incidents where driverless cars disrupted Muni service. During the night of Sept. 23, five Cruise cars blocked traffic lanes on Mission Street in Bernal Heights, stalling a Muni bus for 45 minutes. On at least three different occasions, Cruise cars stopped on Muni light-rail tracks, halting service.
https://www.sfchronicle.com/projects/2023/self-driving-cars/
OK, great: they didn't kill or injure anyone, which is nice, but they were _disruptive_ in a way a human driver would not have been. And critically, we can't _have_ a fact-based conversation about those non-collision incidents, because these companies aren't even required to report them:
> But officials said it’s been difficult to assess their effectiveness because companies aren’t required to report unplanned stop incidents — some of which have been captured on social media — when they happen.
> San Francisco officials want the state to require that companies report incidents when they happen.
I've certainly seen cases where self-driving cars behaved in a way that I would consider reckless if a human was at the wheel, but were not collisions, did not result in injury, and I expect will not form part of any statistics reported to any government body ... but I think that's too low a bar.
> I've certainly seen cases where self-driving cars behaved in a way that I would consider reckless if a human was at the wheel, but were not collisions, did not result in injury, and I expect will not form part of any statistics reported to any government body
A few weeks ago I saw one blow a red light. Nobody was hurt or killed, so it wasn't reported to any responsible agency. It might not even have been tallied by the company, if the car thought there wasn't a problem. But its action was clearly unsafe.
It's hard to get a grip on the problem when the data is so faulty.
Someone posted an example of a self-driving Cruise car that appeared to run a red light: https://www.reddit.com/r/sanfrancisco/comments/14wyyzw/just_.... Though in that example, the car was technically guilty of entering the intersection without sufficient space on the other side, rather than of crossing the limit line against a red light.