In the real world, someone has to program the self-driving system to decide how to react. That is, there is a software team somewhere that has to decide what behaviour to build into the system for trolley-problem scenarios. So your statement that this is just an abstract philosophical question is patently false. Obligatory link to The Good Place making the trolley problem real: https://www.youtube.com/watch?v=DtRhrfhP5b4
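To make that concrete, here is a deliberately over-simplified, hypothetical sketch (none of these names come from any real vendor's code) of the kind of branch some team ends up writing. Even "always brake and stay in your lane" is a policy choice that somebody decided to encode:

    # Hypothetical, over-simplified sketch; not any real vendor's code.
    # The point is only that *some* branch has to exist, and a human
    # team has to decide what goes in it.
    from dataclasses import dataclass

    @dataclass
    class Hazard:
        people_in_path: int    # pedestrians ahead in the current lane
        people_if_swerve: int  # pedestrians in the only escape path

    def choose_maneuver(hazard: Hazard) -> str:
        """Return the maneuver when a collision is unavoidable."""
        # Someone has to pick this rule. "Stay in lane and brake" is
        # itself a choice about whose risk gets prioritized.
        if hazard.people_if_swerve < hazard.people_in_path:
            return "swerve"
        return "brake_in_lane"

    print(choose_maneuver(Hazard(people_in_path=5, people_if_swerve=1)))  # swerve

The specific rule doesn't matter; the point is that whatever the vehicle ends up doing in an unavoidable-collision scenario, a human wrote that behaviour down.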
It is hopelessly naive to think that these problems can simply be engineered away. In the real world, failures happen even in redundant systems. Air brakes are supposed to "fail safe", but in practice a host of factors conspires against them: how well a truck or trailer's brakes are maintained, engine state, speed, loading, temperature, and grade all combine to produce failures. Trains have multiple braking systems, yet sometimes every one of them fails and a spectacular accident occurs.
On top of all the traditional mechanical issues, self-driving vehicles have tonnes of software failure modes that traditional cars do not. More importantly, those software failure modes are not well understood at this point.
If you want to better understand why software can't be trusted to Do The Right Thing, go back and read the investigations into failures of systems that came before. The Therac-25 is a good place to start: https://en.wikipedia.org/wiki/Therac-25
No system a human can build is completely, intrinsically safe. Designers make mistakes. Safety is a process that takes time and effort, and it will take decades for self-driving cars to work out all the bugs.