I understand, but the error may be in how it interpreted what it sensed. This is callous language, but if the models interpreted the pedestrian as trash on the street, then how it responded (driving over it) is not inappropriate.
> I understand, but the error may be in how it interpreted what it sensed.
It may be; it also might not. If sharing data fails, further methods would be needed - but it's a good start for figuring out what data should be recorded for comparison. If the data is entirely incompatible between companies, then we should have regulation requiring them to, at minimum, transcribe it into a consumable format after an event such as this.
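To make that concrete, here's a rough sketch (Python, with made-up field names - not any real standard or vendor format) of what a minimal "consumable format" could look like: each detection transcribed to plain JSON with the classification, confidence, and the action the car planned to take.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical minimal schema for a post-incident record.
# Field names are illustrative, not taken from any real standard.
@dataclass
class DetectionRecord:
    timestamp_s: float      # seconds since start of the logged window
    object_id: int          # track ID assigned by the perception stack
    classification: str     # e.g. "pedestrian", "debris", "unknown"
    confidence: float       # classifier confidence, 0.0 - 1.0
    distance_m: float       # range from the vehicle to the object
    planned_action: str     # e.g. "brake", "swerve", "continue"

def export_incident(records: list[DetectionRecord], path: str) -> None:
    """Transcribe vendor-internal detections into a plain JSON file
    that investigators (or another vendor) could consume."""
    with open(path, "w") as f:
        json.dump([asdict(r) for r in records], f, indent=2)

# Example: one frame where the object was misclassified as debris.
export_incident(
    [DetectionRecord(12.4, 7, "debris", 0.61, 18.2, "continue")],
    "incident_0001.json",
)
```

Even something this small would let a third party line up what each company's system saw and decided at the same moment.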
> ... if the models interpreted the pedestrian as trash on the street, then how it responded (driving over it) is not inappropriate.
If the models saw anything at all, they should not have driven over it. Even human drivers are taught that driving over road debris is dangerous. At minimum, it puts the car and its occupants at risk - and in extreme cases, the driver may not even recognize what it is they are driving over.
If this isn't a case where the car was physically unable to stop, then it's more likely the telemetry will show the person was never identified as an obstacle to avoid at all.
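For what it's worth, "don't drive over anything you detect in your path" is a testable rule. A minimal sketch (Python again, with made-up thresholds and field names, not any real driving stack): treat any credible detection in the vehicle's path as something to avoid, without consulting the classification at all.

```python
# Conservative policy sketch: if the perception stack reports *anything*
# in the vehicle's path, plan to stop or avoid it, regardless of how it
# was classified. Thresholds and fields are illustrative assumptions.

MIN_CONFIDENCE = 0.2     # even low-confidence detections are not ignored
LANE_HALF_WIDTH_M = 1.8  # rough half-width of the travel lane

def should_avoid(detection: dict) -> bool:
    """Return True if the detected object should trigger braking/avoidance.

    Classification is deliberately not consulted: "debris", "unknown",
    and "pedestrian" are all treated as things not to drive over.
    """
    in_path = abs(detection["lateral_offset_m"]) < LANE_HALF_WIDTH_M
    credible = detection["confidence"] >= MIN_CONFIDENCE
    return in_path and credible

# A misclassified pedestrian still triggers avoidance under this rule:
print(should_avoid({"classification": "debris",
                    "confidence": 0.61,
                    "lateral_offset_m": 0.3}))   # True
```

Under a rule like that, "the model thought it was trash" stops being an excuse, because the decision to avoid doesn't depend on the label.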