Really? Why is the original/classic trolley problem in any way interesting if the weight the decision will later have on the person making it has no bearing? (That's a serious question.)
In the problem statement, an actor has control over whether this 1 person or those 5 people die. If they choose red, then 1. If they choose blue, then 5. (Failing to make a choice is choosing blue.) If, after playing out the scenario, they knew someone would use a Men In Black pen on them to completely erase their memory of what happened, is there a rational argument against choosing red?
(I realize this might read like trolling, but I genuinely don't find the original problem a difficult one if you eliminate the human guilt aspect.)
If you choose to switch the track to kill 1 innocent to save 5, you open the door to things like a society where 1 innocent healthy person is killed to harvest their organs and save 5 innocents. The calculation is the same.
Utilitarianism is very dangerous. Our current societal system isn't really based on utilitarianism as I see it, and I don't think we want to live in a world where a pure version of that school of thought is allowed to have any power.
Actively choosing an action that deliberately kills someone else could be weighed negatively by the machine; it would depend on the programmer.
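To make that concrete, here is a minimal, purely hypothetical sketch (not any real system's logic) of how a programmer might add a penalty for deliberate action to an otherwise utilitarian body count comparison; the function name and the `active_kill_penalty` knob are illustrative assumptions:

```python
def choose_action(deaths_if_switch: int, deaths_if_stay: int,
                  active_kill_penalty: float = 0.0) -> str:
    """Return 'switch' or 'stay' based on a simple cost comparison.

    active_kill_penalty is the illustrative knob: a strict utilitarian
    sets it to 0, while a programmer who weighs deliberate killing
    negatively sets it > 0.
    """
    cost_switch = deaths_if_switch + active_kill_penalty  # cost of acting
    cost_stay = deaths_if_stay                            # cost of inaction
    return "switch" if cost_switch < cost_stay else "stay"

# With no penalty, the machine switches (kill 1, save 5).
print(choose_action(1, 5))                           # -> "switch"
# A large enough penalty on deliberate action flips the decision.
print(choose_action(1, 5, active_kill_penalty=10))   # -> "stay"
```

The point of the toy example is only that the outcome hinges entirely on how the programmer sets that weight, which is the "it would depend on the programmer" part.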
I don't think you can apply the word "rational" here. If you don't take a utilitarian position, not deliberately killing the person could be axiomatically rational. It's not just about guilt.