Drivers (both AI and human) may face problems that are essentially ethical trolley problems. While many of these scenarios are artificial to the point of ridiculousness, the one that sticks with me most is "should a self-driving car drive itself off a cliff, killing its only passenger, or hit and kill some number (>1) of pedestrians?". An outside observer might conclude "minimising deaths is preferable, so drive off that cliff", but are people willing to ride in a vehicle that might intentionally kill them as an intrinsic part of its operation? Or will market forces make self-driving cars that choose more selfishly more popular, potentially producing suboptimal, prisoner's-dilemma-style outcomes?
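To make that prisoner's-dilemma worry concrete, here is a minimal toy sketch with made-up payoff numbers (the values are purely illustrative assumptions, not taken from any study): buying a "selfish" car is the individually best reply no matter what everyone else drives, yet everyone buying selfish cars leaves everyone worse off than if everyone had bought "altruistic" ones.

    # Toy payoff model for a selfish-vs-altruistic car market (illustrative numbers only).
    # Payoff = my expected safety, given my car's policy and what everyone else drives.
    payoffs = {
        ("altruistic", "altruistic"): 8,  # everyone minimises total harm: safest roads overall
        ("altruistic", "selfish"):    2,  # my car would sacrifice me; others' cars wouldn't return the favour
        ("selfish",    "altruistic"): 9,  # I free-ride on everyone else's cautious cars
        ("selfish",    "selfish"):    4,  # everyone protects only themselves: worse for all than 8
    }

    for others in ("altruistic", "selfish"):
        best = max(("altruistic", "selfish"), key=lambda mine: payoffs[(mine, others)])
        print(f"If others drive {others} cars, my best reply is a {best} car")
    # Prints "selfish" both times: it's the dominant strategy, even though mutual selfishness (4)
    # is worse for everyone than mutual altruism (8) -- the classic prisoner's dilemma shape.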
There are also different ethical norms about these preferences in different cultures (https://www.wired.com/story/trolley-problem-teach-self-drivi...). These are edge cases, but they're the edge cases people worry about, and they're the source of the ill-definedness: "unhurt as much as possible" implicitly commits to some ethical tradeoff that people can reasonably answer differently.
Also, such meek and suicidal cars would get abused to no end. Just imagine all the assholes today who overtake on blind corners or ride bikes recklessly. Today they still pay some attention, because they may easily get killed if the other drivers don't notice what they're doing quickly enough. With meek AIs on the road you could do anything (as long as you bunch up in large enough groups).