I did the problems under the assumption that "pulling the lever" was an act, while doing nothing was not acting. Implied legal (and moral) liabilities made a difference in my choices.
Legally perhaps, but morally, I've never gotten why so many people think that the physical act of pulling or not pulling makes so much difference.
It's a binary decision with two outcomes; in my personal view it's irrelevant which of the outcomes is caused by physical action and which by inaction (at least supposing you have enough time to think about what to do; obviously, if you have to react in a split second, a bias towards inaction is understandable because you may need more time to make the right decision, but that's not the point of these problems as they are posed).
If you have enough time to think about what to do, inaction is a conscious choice, and if it does more harm than good, you are guilty of not choosing action.
You go from passively letting someone die to actively killing someone. Which is a major difference.
Now, you might think that distinction disappears given enough time to decide, but it's easy to argue that you're currently letting kids die in Africa by your inaction (or Ukrainians, or homeless people, etc.). Being slightly at fault for someone's death is basically a permanent state of affairs, whereas actively killing someone is something few people would be willing to do. It definitely makes a difference.
Consider this version: there is someone on the track and if I don't act, they will almost certainly die. Or I can act and almost certainly save them, but it sends the trolley somewhere unexpected.
If I were a professional rail operator, I could take action. I know how to operate the switch, I know where both lines go, and I can contact the trolley operator to let them know what happened. As a bystander, I would not take action. Maybe it would save that person, but it might doom all the passengers on the trolley. Maybe someone had it under control and I'm throwing a wrench in their plans. In general, we probably don't want random bystanders messing with the operation of heavy machinery.
Sometimes there is a duty to act: if I am babysitting a child and they are suffering in the cold, I better bring them somewhere warm. I may have some responsibility if I see someone suffering in the cold and I have an extra blanket that I could give them. But I am not responsible for buying blankets to give out every time it is cold, nor am I responsible for throwing railroad switches. In those cases, I am more culpable for my actions than my failure to act.
>Being slightly at fault for someone's death is basically a permanent state of affairs, whereas actively killing someone is something few people would be willing to do. It definitely makes a difference
It only makes a difference in the eyes of the beholder, not for those actually dying
Not thinking too hard about reality because reality is nasty is something of a huge bug in human cognition. From that point of view, trying hard not to know anything is even desirable: it's not neglect if you never bothered getting to a state where you are capable of being responsible for anything.
> It only makes a difference in the eyes of the beholder, not for those actually dying
Unfortunately, that's the one (not) making the decision in this case.
> Not thinking too hard about reality because reality is nasty is something of a huge bug in the human cognition.
Is it, though? I mean, in general we should strive to improve the world, but getting overwhelmed by all the bad things that you should be doing something about really doesn't help.
Maybe if we were overwhelmed by all the bad things that we should be doing something about, it would eventually lead us to a point where we would be doing something instead of just living our lives doing nothing (for the most part)?
If we are already talking about changing human cognition, we might as well add the ability to function while being aware of the issues.
I have choice paralysis while buying pants. I'm 100% capable of grabbing a pair and swiping the plastic rectangle. But I just don't know a good solution. I'm afraid of picking wrong.
And that's just pants. Hell, even in math -- the most rigorous subject ever -- there are undecidable problems. Asking me to decide squishy issues of who lives and dies, who I should spend resources helping (including myself), etc...that's so many orders of magnitude more complex, so many immediate and cascading future consequences, of unimaginable import. It's crushing, and maybe without a correct (or even good) answer! It's pants ultra out there. If you make me internalize all the issues, then the only decision I'll make is which corner to lie by in the fetal position.
The only idyllic "human cognition" fix here is basically omniscience. And that kinda feels like cheating, y'know?
The trolley problem is set up in a specific way for a specific reason: the person standing next to the lever has no barriers to action, and comprehends the immediate consequences of their choice. This is not so for the kids in Africa example you give.
The trolley problem does not extend readily to systems involving incomplete information or some kind of inability to enact a solution. That is why moral philosophers despise it as an illustrative example -- armchair ponderers apply it with gross negligence to the context and nuance present, so it ends up illustrating nothing much except ignorance of the details of the thought experiment.
> I've never gotten why so many people think that the physical act of pulling or not pulling makes so much difference.
By pulling the lever, one is intentionally killing a person to save five. In more general terms, a life is taken away for the greater good of society.
A foundation of societies (modern ones, and I guess some more than others) is that taking a life has highly specific restrictions, which are usually justified by the person having done harmful acts (representing a danger to other people). To put it another way, any citizen has the guarantee that, unless they do something harmful, they're safe. It's a contract between the citizen and the society.
By pulling the lever to save the five, the contract is broken. To be consistent with the societal principle, it should be the person on the alternate track who decides whether the lever should be pulled, not an observer.
(The above reasoning is based on a very generic vision of the law. If anybody has some details, they're very welcome :))
Yes, it was broken, but by someone else, not me. I am not at (moral) fault for someone else breaking the contract.
But if I pull the lever (or push the fat man onto the track to halt the train), I am the one responsible for the resulting death, which makes me the one who broke the contract.
That's the major dilemma here, because pulling the lever is the utilitarian-correct choice that saves more lives, but whoever pulls the lever goes from being innocent to being a murderer.
Interestingly the website poll indicates two thirds of people would not pull the lever to have the train run over themselves to save five people. So if you ask the guy on the other track, he seems unlikely to assent to pull the lever in time to save the other people.
No, but I almost never pulled the lever anyway and it cost 80 souls to "solve philosophy". It was almost more but I didn't quite save the sentient robots because I didn't believe they were really sentient.
You probably use a consequentialist meta-ethical framework. This is great -- I do too -- but in a deontological or virtue-based system it may or may not work out this way. We've already got an example of such a deontological system here: if one takes the (somewhat unusual but not unheard of) view that legality implies morality, then there's a moral difference because there's a legal difference.
The trolley problem is meant precisely to highlight these differences, and I think it's one of the best arguments for consequentialism. In my less charitable moments, I like to refer to the concept of privileging inaction as an informal fallacy. It's not really, though; it's just that folks have different philosophical starting points.
Then again there are also deontological (aka categorical) morality frameworks that see pulling the lever as acceptable: it is not “killing”, but the life of the man on the other track is lost as a secondary effect, which must be gravely considered. There are yet other versions of the trolley problem that have you push a man into the way of the trolley to stop it and save the five. Sometimes people pose that variant as though it must be accepted as morally equivalent, but I find the importance lies in why we find the situations different.
In the deontological view, both pulling the lever and inaction may be perfectly permissible.
Consider: Who knows what happens? It's perfectly possible the problem description lied, and the opposite thing happens. Either way real life doesn't come with problem descriptions. Since "what happens" is profoundly inaccessible, the more important question is what you wanted and why.
So, it's permissible to redirect the train away from the one and towards the many, if directing it away from the one was what you wanted. It's equally permissible to redirect the train away from the many and towards the one, if directing it away from the many was what you wanted. It's easy to imagine a world where you got what you wanted, but the bad thing that looked like it would happen as a side effect didn't. Maybe that's the one we live in.
However, if you really wanted a specific person on the track to die, then you should direct the train away from them. Not for their sake, but for yours. What happens is still uncertain, but the important thing is that you did not act on this bad desire.
(By the way, virtue ethics is just a stupid "third way" branding exercise. To say goodness isn't derived from outcomes is fine. To say it is, is at least a coherent position. But thinking you can dodge that problem by talking about "virtues" instead is just nonsense.)
There's a version of the problem that tries to highlight this:
A runaway trolley is heading down the tracks toward five workers who will all be killed if the trolley proceeds on its present course. Next to you is a stranger who happens to be very large. The only way to save the lives of the five workers is to push this stranger onto the tracks where his large body will stop the trolley. The stranger will die if you do this, but the five workers will be saved.
Does the act of physically pushing a person onto the tracks make it different than pulling a lever?
To me, that version just highlights how far-fetched it is to actually find yourself in this situation with all the provided knowledge about the situation and confidence that you're not mistaken about or missing any important details.
It's so far-fetched that it's intended to be a thought experiment rather than a role play, which I think people miss. The *entire point* is to factor out all those innumerable details which complicate every real-life situation, to see if there are underlying principles that can be illuminated. It's not about whether it's a trolley switch or a gun or a baby-grinding machine; it's about whether there's an answer to how many babies you're willing to grind up, and for what.
But it’s reasonable to question the usefulness of any supposed insights gained from thinking through one’s answers to the trolley problem. It’s quite conceivable that no insights can be extended at all to any scenario in the real world.
Inaction is overwhelming if you think about it. By inaction you are guilty of every suffering person around you. And you can expand that pool by saying that your failure to find out about more suffering people is also part of your guilt. Of course, provided you are not already doing your max right now to help them. But hey, you're on HN, so I assume you still have free time to spare ;)
Just wanted to add that the inability to help EVERYONE should not deter you from helping anyone at all. You should not feel guilty that you are helping one person but leaving everyone else in trouble. One is better than zero.
> I've never gotten why so many people think that the physical act of pulling or not pulling makes so much difference.
One reason could be that your presence at this railroad switch is exceptional, or at least unusual in some way. Maybe it's worth considering what would happen if you weren't there at all.
When working in a new codebase, it's generally better to assume that something odd is the way it is for a reason, rather than changing it to something that seems right (easier to understand) to you. This is because in the real world, there is so much you don't (and can't yet) know about a situation that you're thrown into.
I guess this is kind of reflected in the switch example where it's 5 people that tied themselves to the railroad vs. one who didn't, and something like 85% "choose" the 5. How do you know how they got there? That implies so much prior knowledge and background that isn't really considered in the oversimplified "choices" in the website. Maybe they were forced to tie themselves to the railroad at gunpoint, or maybe it's a weird death-by-train suicide cult.
> How do you know how they got there? That implies so much prior knowledge and background that isn't really considered in the oversimplified "choices" in the website. Maybe they were forced to tie themselves to the railroad at gunpoint, or maybe it's a weird death-by-train suicide cult.
This kind of reasoning really defeats the point of the thought experiment, which is to construct a scenario where those considerations aren't a factor so we can reason about ethics and moral intuitions without the greater complexity of real-world situations.
I think you have it backwards. This kind of reasoning is actually why people do thought experiments. If you don't consider anything else, then the answer to the original problem is an easy math problem - one person getting killed is better than five people getting killed. It's only when you start making considerations that the thought experiment starts to gain value as a tool. Do I have more responsibility for acting rather than not acting? Who are these people anyway? Why are they there? You start asking yourself these questions and thinking about how your answer to the problem changes with them and it helps you to understand how you (and others) actually make ethical choices in the real world, which is the whole point, in my opinion.
I disagree - I think that "reasoning about moral intuitions" is completely useless if you're attempting to reason about them in utter isolation and with the assumption that the subject is omniscient.
It's like economists assuming perfectly rational actors in markets, or physicists assuming a perfectly spherical object and ignoring wind resistance.
They're toy problems that don't match reality at all, and the value is dubious at best as anything other than a very gentle intro to the subject.
Here's a thought experiment - How many people do you think would actually make the choice they state they will make if you present them the trolley scenario in real life with no warning? People who say they will pull the lever are fooling themselves.
1. They won't know how to read tracks
2. They won't know how the lever works
3. They don't know for sure that anyone will die: those five people might be able to move off the tracks just fine themselves
4. If they do pull the lever they're almost certainly going to get arrested or troubled by the legal system, because they fucked with shit and someone died afterwards (the courts won't give a shit that "they thought five other people might have died!").
5. For all they know the trolley operator can stop just fine, so why would anyone be about to get hurt?
6.... on and on.
Basically, you're setting up an impossible framework, and the results (even if you get them) are useless because they're only valid in that impossible framework.
If the results of the real world never match the results of the framework you've set up, what is the value of that framework? It's just a shitty model with bad reproducibility. We have lots of those.
So what's your alternative, throw your hands up and proclaim ethical reasoning to be impossible, nothing is true, everything is permitted, and it doesn't matter what you do?
All models are wrong but that doesn't make them useless.
I much prefer: "All models are wrong but some models are useful." For example, flat earth is a model that is both wrong and useless.
My claim is that the trolley problem is useless. It asks people to make a guess about how they would behave, but that guess is predicated on a set of initial conditions that are impossible to fulfill (omniscience is a bitch to get in real life).
How does gathering all that incorrect data help you? What ethical reasoning are you trying to tease out here?
Here's one you might love - "if the earth is flat and you reach the edge, would you jump?"
Now let's just categorize everyone's answer to that question... and: hold the phone! The earth isn't flat? It doesn't matter? It turns out this question has basically no relevance to anything!
I think the alternative is to not try to be a railroad operator in an emergency, when you might make the emergency worse or take on huge liability.
If you are a professional trolley network controller then you have the judgement and the duty to operate/not operate the lever. I think that few people would question the ethics of a professional operator flipping the switch to save the most people.
The value is in training your ethical understanding so you can make better choices in real world scenarios. If you can't answer trolley problems and other thought experiments in a way that is ethically consistent, how can you hope to make ethical choices in the real world with all its complexity?
It's really very similar to the reason physicists use simplified models, and the value of physics, even when it assumes a perfectly spherical cow, shouldn't need to be stated.
I always took it that the point of the Trolley Problem was to demonstrate that moral problems can be difficult, or perhaps even inscrutable, in a way that confounds things like ethical clarity and consistency.
> When working in a new codebase, it's generally better to assume that something odd is the way it is for a reason, rather than changing it to something that seems right (easier to understand) to you.
But isn't that just because finding yourself in a new codebase generally means you will be in a long-term relationship with the other contributors to that codebase, and thus it makes sense to be cooperative? If you were dropped into a new codebase in a hypothetical scenario like the trolley problem, where you only need to make a single choice without any expectation of ongoing interaction with the relevant parties, then you might very well just make the quickest and dirtiest change to the codebase to accomplish your immediate goal.
Because the real world is more complex than a lever, and there are many unknowns in how either physical or human manifestations respond to actions, and doing nothing removes your intent from the mix.
A very clear example of this is medicine, with the "do no harm" principle: the actions of the physician should be chosen to minimize the scenarios in which the patient is harmed under any circumstance (by chance, by lack of patient compliance, etc.).
Furthermore, actions in the real world carry different experiences and meanings. It's easy to think about pulling a lever to kill 1 instead of 5, but not easy to think of killing one person to harvest their organs and save 5 others, though they are, with a lot of abstraction, "equivalent moral actions".
The trolley problem is usually phrased with a "bystander" making the decision, but is simplified to the point of ignoring most of what is important about being a bystander, in ways that makes people's moral intuitions look unnecessarily silly precisely because it's an artificially simplified scenario. In particular, typically bystanders in the real world are both numerous and uninformed. As a result, assuming that each bystander has a randomly biased estimate of the truth, if everyone followed the simple logic of pulling the lever iff they believe doing so will net save lives, the lever is likely to be pulled far more often than it should be, because the person with the most extreme belief will conclude that doing so is worth it. In the real world, it's typically the case that "action" is much more difficult to reverse than "inaction".
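That winner's-curse dynamic among bystanders can be sketched with a tiny simulation. All the numbers here are made up purely for illustration: I assume pulling has a true net value of -0.5 lives (i.e. pulling is actually harmful) and that each bystander's estimate of that value carries unit-variance Gaussian noise.

```python
import random

random.seed(0)

def pull_rate(n_bystanders, trials=100_000, true_value=-0.5, noise=1.0):
    """Fraction of trials in which at least one bystander's noisy estimate
    of the net lives saved by pulling is positive, i.e. someone pulls."""
    pulls = 0
    for _ in range(trials):
        if any(random.gauss(true_value, noise) > 0
               for _ in range(n_bystanders)):
            pulls += 1
    return pulls / trials

# Pulling is net-harmful here (true value -0.5), yet the lever gets pulled
# in roughly 31% of trials with one bystander and about 97% with ten,
# because only the single most optimistic estimate has to cross zero.
print(pull_rate(1), pull_rate(10))
```

The point of the sketch is only the direction of the effect: the more independent bystanders there are, the more the decision is effectively made by whoever happens to hold the most extreme belief.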
There are other thought experiments that elucidate this more clearly, like a soldier deciding whether now is the right time to fire the first shot to start a battle.
People's moral intuitions have heuristics that attempt to deal with this ("bystander effect") although they definitely can be poorly-calibrated in the real world, and are almost always badly calibrated in artificial thought experiments because that's not the environment they were designed for.
Personally I agree with you and it seems obvious to me to pull the lever. Here is my take on why there are people that believe it’s obviously the other way around:
There is a fundamental branching point of ethical systems between consequentialism (eg utilitarianism, which says the outcome is what matters) and deontology (which says basically that some set of rules exist, and ethical behavior equates to following those rules, no matter the consequence).
If you are a deontologist, “the ends don’t justify the means” and it’s rarely ok to just kill someone to save someone else. If you are a consequentialist then the choice to live in the better world is obvious.
Western morality, being heavily influenced by Judeo-Christian systems with Ten Commandments and books of God’s Rules, has a lot of deontological assumptions baked in (as does the legal code). So even with time to ponder the problem, “thou shalt not kill” will weigh heavy on many.
It seems overly simple to me to reduce it to "more is better". What if the five people are 100-year-old dementia patients moaning in agony and the one person is an infant? Do you still pull the lever because "more people alive is better"? What if the five people are infants and the one person is an elderly convicted murderer; does that change how easy it is to pull the lever? Under "more is better" neither should change your view, but they feel different.
> "(eg utilitarianism, which says the outcome is what matters)"
Speaking of the outcome - before you are involved, the person who tied six people to the tracks is attempting murder; if society finds the person it will punish them. After you pull the lever, should society try you for murder? For aiding and abetting a crime? Celebrate your rescue? A society where any individual can kill any other individual, if they think it is for the greater good, feels like it would be unable to hold together. The outcome of more people being alive but the destruction of society seems like it could be bad enough to outweigh the loss of five people.
It’s a thought experiment for studying ethics. You can modify all of the variables as you choose, but in general you want to pick the simplest form that suffices to demonstrate the point.
In this case making the people different would make the experiment needlessly complex. The problem as-stated (assuming identical individuals) already illustrates the consequentialist/deontologist conflict.
I think you are perhaps engaging on a different level than intended; “what are the legal consequences of this action” is downstream of the problem. In other words, we should make our legal system conform to our ethical system, not take the legal system as some fact that must guide our ethical principles.
This is not intended as some legal case study to test law students’ understanding of culpability in homicide. (Although that might well be an interesting discussion in its own context).
> It seems overly simple to me to reduce it to "more is better".
No consequentialist would claim this, and to be clear that’s not what I claimed either.
You said "If you are a deontologist, “the ends don’t justify the means” and it’s rarely ok to just kill someone to save someone else. If you are a consequentialist then the choice to live in the better world is obvious." and I took that to mean that killing someone to save five people was what you consider the consequentialist 'obvious choice'. From there you now clarify that the people are all the same so all we have to go on is quantity of living people, it seems. If "more people alive" is not what you mean by "the better world", then what do you mean by it?
You said “reducing it to more is better”, which is itself an oversimplification. That is not the principle at work here.
There are various utility functions that utilitarians might choose, for example hedonic (Mill) or eudaimonic. And there are various other assumptions you must make, such as "is the average life net-positive in utility or net-negative?". Then you put your values into your utility function and assess the possible worlds and their likelihood of occurring. (For simplicity I'm leaving out the utility of having rules as heuristics, but it's worth mentioning, as two-level utilitarianism seems appealing to me.)
All else being equal, as in this thought experiment, saving more lives is better than saving fewer in my and most utilitarians’ opinion. But “more lives is better” is not the underlying principle at work in making the evaluation/decision. It’s merely the result of the calculation. One can easily construct thought experiments where more lives in existence would reduce utility and not increase it, in which case utilitarians would advocate for not saving those lives. For example if we modify the trolley problem to say pulling the lever will also cause the five survivors to be imprisoned and tortured for the rest of their lives.
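As a minimal sketch of that kind of evaluation (the utility numbers LIFE and TORTURED are entirely made up for illustration, and the outcomes are treated as certain):

```python
def expected_utility(world):
    """world: list of (probability, utility) pairs over possible outcomes."""
    return sum(p * u for p, u in world)

LIFE = 1.0       # assumed utility of one ordinary survivor (illustrative)
TORTURED = -2.0  # assumed utility of surviving into lifelong torture (illustrative)

# Base problem, outcomes certain: pulling saves five, not pulling saves one.
dont = expected_utility([(1.0, 1 * LIFE)])
print(expected_utility([(1.0, 5 * LIFE)]) > dont)      # True: pull

# Variant above: pulling also condemns the five survivors to torture,
# so the same calculation now favors not pulling.
print(expected_utility([(1.0, 5 * TORTURED)]) > dont)  # False: don't pull
```

The arithmetic is trivial; the real philosophical work hides in choosing the utility function and the probabilities you feed into it.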
(Look up “the repugnant conclusion” if you want to see a thought experiment where common intuitive morality breaks down under most systems).
There are metasocial reasons for this that are probably best illustrated by one of the classic variants on the trolley problem. Imagine you're a surgeon in a hospital. You have a healthy patient who is under for knee surgery or something. At the same time, five people come into the trauma ward from a trolley accident down the street, all of whom will die without an organ transplant. All of them are compatible with your patient.
What should you do? If you're going to take the view that failing to save a person you could have saved is morally equivalent to killing, then clearly you should kill your one patient to save the other five.
But think of the longer-scale effect this would have. Would anybody ever go to a hospital and get elective surgery knowing they might be killed to harvest their organs? A more general form of trolley problem doesn't necessarily present that exact dilemma with such clear consequences, but at any scale, think of the consequences of being concerned with what happens because of your inaction. Do you donate blood once a week or whatever the max frequency is at which your body can regenerate the red cells? Do you still have both kidneys? Do you ever spend money on anything except bare minimum shelter, enough grain to stay alive, and every other cent going to the AMF or whatever the maximum QALYs saved/dollar donated charity is? How many people have died in your time on Earth that could have lived if you'd done something different? If we all seriously made that a primary concern in our daily decision-making, virtually all productive action would be paralyzed. Everybody would be trying to become a hedge fund manager living like a pauper and giving all their money to the AMF, but the AMF would no longer be able to do anything, because everyone would be a hedge fund manager and there would be no one left to manufacture mosquito nets.
I guess because there's more wiggle-room with a non-action. If you don't pull a lever, you could argue there wasn't time to pull the lever, or that you were frozen with fear or whatever.
I think non-actions are judged less harshly than actions, so people are more inhibited about taking action.
Clearly not. The vast majority of people responding to this problem usually want to pull the lever, murdering an innocent who would otherwise have lived through the day: a utilitarian perspective that holds that by default it's best to maximize the number of lives saved. It's difficult for those of us who would argue that every life is valuable, and that, not being god, the bystander does not have the right to make this call. The bystander did not place the runaway trolley there, nor are they responsible for the people who are tied to the tracks.
I would ask the person who is (in most of these scenarios) on the diverted tracks what they want me to do. They have the right to make this decision, since they are masters of their own life.
I’ve always presumed you don’t have time to ask those sorts of questions in these problems. You have to decide that you’d let 5 people die so your action doesn’t kill one, or kill the one not knowing how they feel about dying to save five.
What I find…not surprising, I guess, but sad is that while a large percent of people would pull the lever, only a small percent would pull the lever if they were the one on the other track. People not being at least as likely to pull the lever when their own life is at stake breaks all basic moral principles.
I did notice that. I'd pull the lever if I was the one on the tracks, since in that case I am not robbing someone else of their life through my decision - my life is my own to sacrifice. At that point it's best to save five.
Then again, a sizeable minority wouldn't sacrifice their life savings for lives either, which means this portion of the respondents does not think the lives of strangers have value at all, or at least that they are not in any case responsible for lives other than their own. Which is interesting!
Right now, non-hypothetically, would you be willing to donate your internal organs to those who will die without them, even if it meant you will not survive yourself?
Not right now, no. That would be terribly inconvenient.
Jokes aside, I'm aware of my own moral weakness in this regard. The religion I was raised in - and I'm still a spiritual person - holds that suicide is the gravest of sins, but I don't have that high an opinion of my own life and its potential. The closer I am to the situation, and the more urgent it is, the more tempted I am. So I try to stay aloof, for the sake of those who love me, and out of my own instinct of self-preservation, which is a powerful force in most human beings.
If your mind was exactly like mine, and you were close enough to the situation to make the decision to sacrifice someone else for a group of strangers - and being tied to the tracks is pretty damn close - you would likely be willing to sacrifice yourself. But remember, someone must have tied me to the tracks in the first place for my life to be in danger. As I said in previous comments, in most cases I am, through inaction or stalling, attempting to save the life of the single person on the tracks - the person in the same position. I will not kill to save others, and attempting to not kill myself to save others is consistent with this position.
If that's unpalatable, what about have you donated one of your kidneys yet? You can live a healthy life with just one. If you truly believe you would sacrifice yourself in this trolley scenario, I don't think it is possible to justify not having done something where (excepting the slight risk of more serious consequences) the only sacrifice is time.
A perfectly moral person might? But I also think there's a difference of kind between dealing with tragic one-off events (such as someone getting run over by a trolley) and dealing with normal course-of-life problems (illness and bodily degradation over time).
To explore the idea being brought up here, what type of death causes are considered 'tragic' or 'one-off'? Is it the improbability of the cause which will influence the decision-making? If so, wouldn't how the cause is categorized/labeled directly impact our perception of the improbability of it?
For instance:
- Being run over by a trolley is a highly improbable event
- Being run over by a vehicle (a broader category that includes trolleys) is a less improbable event
- Dying of a very specific form of cancer which destroyed an organ is a highly improbable event
- Dying of cancer is less improbable
In the scenario for organ donation, what if the recipient had an improbable reason for needing the organ?
As far as my own views, I don't know if the tragicness/one-offness/improbability of the cause of death factors much in my decision-making. Though, I do think there's probably a good argument for reducing active suffering (that I haven't fully thought out).
There's more perceived wiggle-room with rationalizations, not actual wiggle-room.
The only moment in the scenario that matters is when you've made your choice and as others have said: it's binary. You pull the lever or you don't. The moral act is in making the choice, not what comes after. In that moment of choice you've expressed a hierarchy of your values.
Now, you might value most highly your own mental well-being, and making the most of that by rationalizing to yourself that you didn't actually make the choice that you just made. You might value your social standing or legal disposition the highest and make the choice accordingly. etc.
Of course, the scenario itself is a bit silly and not the kind of everyday living guidance that one will most likely have thought deeply about in anything other than abstract moralistic terms. It's pretty rare to face that sort of scenario with that kind of consequence.
In real life scenarios there is a built in uncertainty with doing an action.
Imagine a scenario where you redirect the trolley to kill someone but the people you wanted to save would have survived anyway.
You could, but uncertainty always exists in the real world, so if it doesn't exist in the trolley problem then your choice in the trolley problem doesn't map to the real world and is not useful.
For one thing, in the real world if the trolley is close and moving fast then you have little time to think and observe and high uncertainty, but if the trolley is distant and slow then you have lots of time to try calling other bystanders, untie or rescue people from the track instead. So the more your choice is constrained down to just moving the lever or not, the higher uncertainty and less clear the situation is.
And in the real world it's more likely that a group of young people would tie some shop mannequins to a track for a laugh, or a film studio would tie props to a track, than that a moustache-twirling villain would tie some real people to a track. There is uncertainty in simply believing what you see; if I saw five people tied to train tracks and about to die, and was convinced they were real people, I would be thinking I must be dreaming.
Not my circus, not my monkeys. I don't think I can be held morally responsible for inaction in any circumstance. We didn't start the fire etc.
I answered "no pull" to every one except the one where the express goal was pranking the trolley driver (and no implied harm from pulling). This is apparently an unpopular opinion, but the only one I can reconcile with my own concept of responsibility.
I used to think like you until I learned about how ethical frameworks are different when you are being extorted. This trolley problem is a class of extortion because your choices are very limited and out of your control.
For example, if someone forced you to steal a package of bubblegum or else they would kill your family, it wouldn't be wrong to steal the bubblegum instead of doing nothing. In this case, it's better to think of it as the act of saving your family: the extortionist caused the gum to be stolen.
Every day people go through this type of extortion-limited choice when, for example, a couple experiences an ectopic pregnancy where inaction would result in the death of the mother. You aren’t primarily choosing to kill your child, you are choosing to save your wife.
Your point is taken, but I want to point out that an ectopic pregnancy will never become a child (fetal death is guaranteed) and is almost always fatal to the mother if not removed.
There have been rare, well-publicized cases where an ectopic pregnancy has been brought to term, but the 1/1000000 chance or whatever it is means it’s often not worth trying. I wouldn’t be surprised if eventually humanity has the tech to save these with artificial means.
Cool, now tell this to conservative politicians and evangelicals that still consider the removal of an ectopic pregnancy to be abortion and morally wrong.
It’s true it’s context dependent. E.g. the mother may only have 1 day of life remaining for other reasons, in which case saving the child might be preferred.
To pick an extreme but well-known thought experiment. You are walking past a pond, and see a child drowning in it. You glance around and there is nobody else nearby. You could easily jump in and save the child. It will certainly die if you do not.
If you choose to do nothing and ignore the drowning child, are you really not morally responsible in any way for the child's death?
If you are morally responsible for the child's death, you are morally responsible for basically every evil in the world right now, since you didn't do everything in your power to prevent all of them.
You may say that that is the case, but if you're responsible for everything, you may as well be responsible for nothing.
> If you are morally responsible for the child's death, you are morally responsible for basically every evil in the world right now
No, you are morally responsible for every evil happening right in front of you that you could immediately change with little risk to yourself.
For instance if you can't swim and the child is in the middle of the pond, I'd argue you aren't responsible because the risk is too great to yourself.
In fact due to the danger of drowning people pulling you under, I'd argue unless the child is in water shallow enough for you to stand in it's not your moral responsibility to save them.
Though in the situation, I'd probably feel compelled to save them anyway.
> No, you are morally responsible for every evil happening right in front of you that you could immediately change with little risk to yourself.
The problem with this is that you snuck a quantitative difference in there and made it sound qualitative. How much risk is "little risk"? What if you could spend all your money and save N people from starvation? There's no risk to you, so are you responsible for their deaths if you don't do it?
What if the child is a bit less likely to pull you under? What if even less than that? Where do you draw the line?
You're never completely free of responsibility, there are just varying degrees.
Why not? Why does your presence imply your responsibility? In the given trolley problem, we can infer how events would unfold in your absence. Why does your presence imply they should unfold any differently? Shouldn't responsibility be voluntarily accepted, rather than imposed by default?
How can you impose a duty to fight evil when "evil" itself has no workable concrete definition?
I disagree. There are practical limits to what we are responsible for. And there are current obligations and responsibilities we have for ourselves. Just because we know of something does not make us responsible for it. Proximity and risk also play a role.
I am obligated to my family and to myself. To provide for them as an example. But I would also be able to fly across the world and feed a starving child, in theory. But my obligation to my own family, and myself outweighs that. There would also be risks to the journey. Consequences with those actions as well.
Is this true? I think most of us would say proximity to a situation (and the ability to handle it) changes our moral imperative. That's what makes the trolley problem so… imperfect? It's hard to say how it extrapolates to everyday life, since it's a situation that would probably never happen.
If you say that proximity does not imply moral responsibility, where are you drawing the line? Would family, friends, and job duties encompass it? Certainly you can't say that helping your child implies you are responsible for the whole world.
> If you are morally responsible for the child's death, you are morally responsible for basically every evil in the world right now, since you didn't do everything in your power to prevent all of them.
That is my personal take on it. We are all living in sin, in reality we are all full of shit and have only a veneer of ethics.
Interesting! Do you disagree with the legal traditions that penalize doctors for failure to render aid, even when not at work? Or perhaps that is like saying you are actually the (or at least a) trolley switch operator, you just aren't on duty, in which case maybe your position obligates action when it is warranted?
Precisely. I am generally against compelled action of any kind, as it violates mutual consent, a fundamental principle I hold.
A trolley switch operator has explicitly opted in to the responsibility, and consents to same. A bystander has not.
This is why the famous internet video of the trolley problem acted out in the real world was required to offer post-experiment psychological counseling to the test subjects. They had not consented to being placed in a position of responsibility for life safety.
> compelled action of any kind, as it violates mutual consent.
So many of these questions are artificial, and it's interesting how the legal system gets involved, but to a certain extent I think these questions are meant to describe a person's moral position outside of society's judgement of them. In many of these situations, it's life and randomness that is putting these people into those situations, not producers of internet videos. I guess if you don't feel bad you're not morally responsible, but if someone was in distress I would feel _compelled_ to act.
I assume by compelled you mean by human forces, but I can't help but compare it to the notion that chance and 'destiny' often violate our consent, and compel us to action.
Is it conceivable for you to be placed in a situation where there is no "privileged default"? In other words, a situation where you must choose between two options and there is no option that you can somehow point to as the "no, I refuse to choose" option?
The main difference between action and inaction is that with inaction it's much less likely the other monkeys will conclude you are dangerous and should be killed (because they're not any worse off by your inaction, almost by definition).
> Legally perhaps, but morally, I've never gotten why so many people think that the physical act of pulling or not pulling makes so much difference.
As Henry David Thoreau said:
It is not man's duty, as a matter of course, to devote himself to the eradication of any wrong; he may still properly have other concerns to engage him; but it is his duty, at least, to wash his hands of it, and, if he gives it no thought longer, not to give it practically his support.
It's not your duty to fix the world. It's not even your duty to optimize the outcomes of the world as best as you could. The world is not in your hands. There was one trolley problem on this page where everything was blurred, but in the real world you don't remotely get clean problem statements at all, let alone clean outcomes. Not only are the outcomes profoundly unknowable, but the world is full of other people pulling their own levers!
To illustrate it, maybe he should have added one trolley problem where you were given the classic #1 description, but regardless of what you picked, the opposite happened. Or you got a random pick from one of the other people on the site. Or one where a third, entirely unexpected thing happened.
> It is not man's duty, as a matter of course, to devote himself to the eradication of any wrong; he may still properly have other concerns to engage him; but it is his duty, at least, to wash his hands of it, and, if he gives it no thought longer, not to give it practically his support.
Classic appeal to authority. The above quotation contains grand declarations with no supporting logic or evidence. Just because Henry David Thoreau said it doesn't make it true.
> in the real world you don't remotely get clean problem statements at all, let alone clean outcomes. Not only are the outcomes profoundly unknowable, but the world is full of other people pulling their own levers!
The outcomes of inaction are equally as "profoundly unknowable" as the outcomes of taking action. From an "unintended consequences" perspective, it's a wash. So we might as well focus on the first order known consequences, which are pretty clear.
If we're going down the road of appealing to authority, I'm partial to "The only thing necessary for the triumph of evil is for good men to do nothing"
I explained at length why I agree with the Thoreau quote.
> The outcomes of inaction are equally as "profoundly unknowable" as the outcomes of taking action.
Absolutely.
> From an "unintended consequences" perspective, it's a wash.
Indeed. That's exactly what I said.
> So we might as well focus on the first order known consequences
Sure. Or we might not. Either is permissible because...
> which are pretty clear.
That's where you've been lied to. And the site author absolutely should have added an example where the description didn't match up with what happened at all.
The trolley problem is specifically about a situation that is ongoing regardless of your presence.
You're not running the train, nor own the railway, you didn't bind the people on the rails. You just happen to be there, and can perhaps improve the outcome.
In case the worst outcome happens because of your inaction, the biggest moral fault lies on those who created the situation in the first place.
(If you're looking at a bank robbery and don't push the alarm button, you have some moral responsibility, but the robbers are the villains.)
When you take action, you become part of the situation and take full responsibility of the outcome.
So, no, I don't think the choices are morally symmetrical. Taking action is morally a high-risk, high-return strategy, while inaction is low-risk, mild-return (best case is you did nothing and that was the right choice, so you could have been absent and it wouldn't have made a difference).
>> if it does more harm than good, you are guilty of not choosing action.
True. Yet some large subset of people seem to highly value plausible deniability in their own heads - they fear having to think about the one person they kill by pulling the lever more than the five dead they can ignore because they did nothing, expecting that they can say they had nothing to do with it.
Also, it seems like they can't recognize the inherent limitations of the situation - someone's already dead, you're just trying to make a least-worst choice - and I've seen many who fail to recognize such situations and just think that a good option must be available when it isn't, and that delusion ends up causing the worst outcome.
Morally it is different, but that's also the gist of the problem. Inaction makes you exempt because you didn't take part.
People choose inaction all the time. When we see a bully abusing a colleague... a homeless person on the street... a car accident. This is the metaphor.
Morality is an evolutionary adaptation to improve group cohesion. It is much easier to attribute blame/praise to active behaviors rather than passive ones, since it's plausible the passive person didn't know there was a problem in the first place.
That said, if you do actually care about consistency and the consequences, and don't care about what others think of you, then active/passive makes no difference.
What does having enough time actually mean? I am almost certain that I would freeze if I encountered such a situation in real life, or panic and be unable to figure out how to flip the switch ...
... which brings up the real problem with these scenarios. In real life, the person at the switch would be trained. While that training is unlikely to include hypothetical scenarios created by philosophers, they would be expected to assess the safety of the act. This would put them into a position where the physical act of pulling or not pulling the switch would not, as you say, make much of a difference. They would have to answer for their actions down the line, even if their only answer was that it put them into an ethical conundrum where they had insufficient information to assess the value of that one life versus five. A court may, or may not, accept that argument. It could be based upon the numbers (e.g. five lives are more valuable than one). It could be based upon the psychological impact of the decision upon the switch operator (e.g. a calculated sacrifice of one life may be frowned upon relative to an emotionally crippling indecision that led to the loss of five lives).
In turn, that brings up another issue with these artificial scenarios: they virtually always include follow-ups that are intended to raise doubt in decisions made. What if that one person was a child with their entire life ahead of them, and the five lives elderly people who have less to look forward to? What if that five included a "Hitler" plus four regular people, and the one was a "Gandhi" (i.e. the one slaughtered many orders of magnitude more people, and the one was revered for their regard of life)? Those are the sorts of things that are usually raised to justify indecision, even though they are highly unlikely.
I'm not convinced that's wrong, it's just that we need to accept we can't always act 100% in the best way for humanity. Rather than try to convince ourselves that our actions are somehow moral I think we need to accept we act out of selfishness sometimes and deal with that internally, otherwise we end up crafting philosophies saying that the hoarding of massive wealth is somehow moral just to try and stop the cognitive dissonance.
+1, and "Implied legal (and moral) liabilities" is just one of the issues if you're viewing the Trolley Problem as a real-world, real-consequences situation. (Vs. something from the land of make-believe, which is cool / interesting / empowering to sit around & talk about.)
For starters, real-world railroad (RR) switches are more complex than what you would understand from model RR sets, cartoons, old movies, and philosophy books. They are not binary, are often less than well-maintained, and may require upper-body strength that you don't have to successfully throw. The trolley may be going too fast for the track that you divert it onto, resulting in a derailment that kills everybody you were looking at - plus some more inside the trolley, plus extra bystanders. Your well-intentioned passerby's understanding of which way the switch is actually pointing may be wrong. RR history has some famous (and deadly) accidents where an experienced RR employee misunderstood the situation and threw a switch the wrong way.
Real-world, I certainly would not be touching the switch.
That's interesting. I didn't consider any legal liabilities at all, and would easily break the law in order to sacrifice one person in trade for five people. I also didn't consider doing "nothing": not pulling the lever was also an act, so I didn't "see" any option of doing nothing.
Fun to see how differently people approach these ones :)
Actual law strongly distinguishes between action and inaction. For example, the Polish penal code says you can be criminally liable for a result of inaction only if you had a specific legal obligation to prevent it.
Would you kill your child to save the two politicians you hate the most, one annoying youtuber, a neighbour who has gotten on your nerves for ages, and two complete strangers?
> [I] would easily break the law in order to sacrifice one person in trade for five people.
This is a quite disturbing statement. So many psychopathic examples; If you were a doctor would you randomly kill healthy people with compatible organs to multiple patients? Or a researcher inflicting gruesome deaths to thousands of innocent healthy people to save millions later...
In fact we could simplify your statement further: the ends always justify the means - and so endorse all terrorism and war, even genocide, whenever the end is enough peace to make a net positive.
> If you were a doctor would you randomly kill healthy people
No.
> researcher inflicting gruesome deaths to thousands of innocent healthy people to save millions later
If it was guaranteed to save millions later, I'd think about it, if we remove the "gruesome" and "healthy people" parts. But "inflicting gruesome deaths to thousands of innocent healthy people" for the mere possibility of saving millions later? No.
It all depends on the context, obviously, as life is never between two binary choices or black vs. white.
I don't think it's so disturbing to want to help the greater number. I (and many others) already do this constantly by protesting and striking. Strikes generally impact people who don't actually have anything to do with what you're striking against (example: when doctors strike, it hurts sick people), but in order for you and your colleagues to make a wage you can actually survive on, the ends do sometimes justify the means.
> If you were a doctor would you randomly kill healthy people with compatible organs to multiple patients?
> > No.
Why not? It's clearly a different approach from your original choice to pull the lever to kill a person in order to save 5 other people. So what changed?
> researcher inflicting gruesome deaths to thousands of innocent healthy people to save millions later
> > If it was guaranteed to save millions later, I'd think about it, if we remove the "gruesome" and "healthy people" parts.
So you're willing to pull the lever to save 5 people because it's seemingly obvious those 5 people will be saved (reasonable to me), and you see no problem killing 1 person - to reference your previous post, not only would you do that, you would easily break the law to do it! However, when it comes to killing thousands in order to save millions - so the "net gain" is no longer 4 but millions, a difference of three orders of magnitude - even if we remove the "gruesome" and "healthy people" parts, you will only "think about it"? :D
> It all depends on the context
It seems to me you don't care about the context when it comes to killing one person, but once the stakes are higher and involve thousands, you hesitate. So I too find it disturbing: not just that you don't treat people individually and simply consider the outcome of the trolley dilemma as a +4 gain rather than as killing to save, but also that the automatic disregard for individual rights paints a portrait of a person very dangerous to society.
What about the standard but missing question. If there is no lever but someone you can push into the tracks to divert the trolley, would you? If there is no one to push, would you jump in the tracks yourself?
This is not the opposite here. The opposite is to not save the lives of some people at the expense of taking the life of another, i.e. the ends do not justify the means. It is a moral dilemma, and whether or not you break a law does not matter, nor should it exclusively influence a decision here (I mean, to do harm just because you abide by orders - that's immoral).
I was responding to someone who was generalizing; it's not in any way pedantic to put a generalization to the test.
I'm also quite surprised that someone could imply that overthinking is a negative on HN. Gaining a deeper understanding of something, even modern philosophy, is intellectually useful. You should not under-think things; it does you a disservice to skip over the flaws and paradoxes in the world.
I overthink a lot, I’m not against the concept. But I do need people to counterbalance that impulse of mine sometimes. So I’m just representing that side here.
Also, I think there’s a tendency in philosophy to end up in places you just don’t need to go. Zeno says it’s impossible to walk to a destination; Diogenes counters by walking to a destination.
Sometimes a trolley problem is just a trolley problem, not a side effect from some universal theory of everything.
A law that says “don’t pull this lever” or whatever is different than “don’t kill a healthy person and harvest his organs”.
> In fact we could simplify your statement further: the ends always justify the means - and so endorse all terrorism and war, even genocide, whenever the end is enough peace to make a net positive
On the other hand, you seem to be arguing that the means justify the ends. So you'd let all the terrorism and war, even genocide, go ahead if stopping it required staining your hands.
On the killing 1 person to save 5 lives example, it seems like your answer is "I wouldn't do it", which for the people actually being killed is equivalent to killing 5 people to save 1 person (but what really matters is that you weren't the one personally killing the 5 people).
It can help to understand why some people think inaction is an action by abstracting the problem so that you can model it in a computer using standard decision-problem algorithms. When you do that, you have an initial state and two edges, and the edges lead to terminal outcome nodes. The choice of which edge to take is an action. This isn't (usually) controversial. People generally call the do-nothing action an action in these circumstances: extensive-form game theory will call it that, as will Q-learning, and you'll often see the same framing even in algorithms like expectiminimax, though sometimes they'll prefer to call it a move.
In my estimation, inaction only seems to not be an action because people are afraid of the idea that they are causally connected with outcomes in which things are killed - they want to avoid tit-for-tat game-theoretic follow-on concerns that aren't part of the game definition but might be implied by it. I think for myself, I discount this concern, because I assume policies of mercy are preferred to policies of judgement - the world is so complex and hard that some error isn't just reasonable, but inevitable. Not allowing for mercy in response to harm from decisions seems more likely to trap us in tit for tat. So intent ends up mattering to me much more than the outcome when I consider the follow-on games that are implied.
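To make the abstraction concrete, here is a minimal sketch (my own illustration, not code from any of the frameworks named above) of the trolley problem as a one-step decision tree in which "do nothing" is just another action edge leading to a terminal outcome:

```python
# Hypothetical utilities for the classic scenario: negative values = lives lost.
# Both edges -- including "do_nothing" -- are modeled identically; the
# algorithm has no privileged default and no concept of "inaction".
OUTCOMES = {
    "pull": -1,        # trolley diverted, one person dies
    "do_nothing": -5,  # trolley continues, five people die
}

def best_action(outcomes):
    """Pick the action edge whose terminal node has the highest utility."""
    return max(outcomes, key=outcomes.get)

print(best_action(OUTCOMES))  # -> pull
```

In this framing, the action/inaction asymmetry that people feel simply does not exist in the model; it would have to be added back in as an extra penalty term on the "pull" edge.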
I can either pull the lever or not pull the lever. Choosing not to pull the lever when I know I can save others would be making the choice not to save them.
> Implied legal (and moral) liabilities made a difference in my choices.
This is always my problem with the trolley problem. Are we supposed to take second order effects into account?
For example, if I destroy the trolley that is killing 5 people over 30 years with CO2, does another trolley get built to replace it?
Or ‘kill 5 people now or send the trolley into the future 100 years and kill 5 people then’. Is it 100% guaranteed the trolley kills them then? Or can I assume there is a tiny chance they figure out time travel and can make preparations?
Or the ‘stuck on an eternal loop’. Does this mean true eternity and are the people in the trolley immortal? Or just for 70 years?
I think the trolley problem is not about simply getting the right answer. The problem is about figuring out correct answers for various circumstances, defining those circumstances, discussing them, and improving one's own and others' understanding of morality.
Ideally, the problem would require two possible actions, and not allow inaction. Two possible avenues for such a formulation are
1) The lever is in an intermediate state, that will cause the trolley to set into effect some global catastrophe, and therefore must be pulled in one of two directions. This may be a rather unconvincing scenario.
2) Have two trolleys, each running down its own track, and the operator has a choice of which of two levers - only one of which can be reached in time - to pull to divert a trolley onto a harmless side track.
I had to put the "legal repercussions" concept aside for this; because otherwise I'd never take an action that would involve someone being killed. No matter what the "if I did nothing" option was, you'd be in for a world of hurt in the courts if you took action that wound up killing someone; even if it was them or 1,000 other people dying. Someone would take you to court.
I take exception to the idea that the legal choice implies it being the moral one. Legality seems to me to be more of the bare minimum expectations and actively avoiding doing the wrong thing (don't kill people) rather than doing the right thing.
If legality is supposed to be based on morals (not sure how many people agree with this?) then wouldn't basing morals on the law be a circular reference?
Yes. E.g.: I didn't choose to kill the lobsters instead of the cat. I chose not to act. If they'd been switched, the cat would have been greased instead. The statistics are presented after the fact like you had a preference for one over the other, when in most versions of this problem, I was deliberately choosing not to participate. I refuse to be complicit in this evil.
I maintain that the real answer to the trolley problem involves an overwhelming personal struggle that is so traumatizing that it transforms the hypothetical subject into a vigilante -- a hero who devotes themselves to carrying out personal justice against the nefarious evil-doers who keep setting up these trolley problems. The problem is the Joker, ergo the solution is Batman. Dismissed!
I've thought that too, and it's hard to believe you really could convince yourself that your level of responsibility/guilt for the outcome is significantly less in the case where you choose not to intervene even when you easily could. If the lever was some distance away (and certainly if other people were just as able to reach it in time), then not running to it and pulling it seems a less culpable choice.
> But what if you put the Do Nothing choice on a timer?
This is actually done in the game Dr. Trolly's Problem[0]. This technicality is interesting enough that a streamer created his own internal rule around it to handle issues of morality[1].
This gets at what is, to me, an extremely important distinction between "killing" and "allowing to die."
Death happens to everyone eventually. No way around that, at least not yet. The most I can do with a trolley switch is possibly affect the timing.
It seems to me that any time I pull the lever to direct the train toward a person (as opposed to toward other trains or lobsters or money), I'm causing the death of whoever is on the other track to happen sooner. I'm killing them.
But if I don't pull the lever, if I don't intervene, I'm allowing death, which was already coming, to come to whoever's on the original track.
As a principle, the distinction between killing and allowing to die really starts to make itself felt when we're talking about the difference between, say, turning off a respirator to allow someone to die, versus actively euthanizing someone by administering medication to stop their heart, even though in this case, both involve an action.
To me, those aren't the same act. There's no moral obligation to try to extend life as long as technologically possible. Death comes and that's OK. But there is an obligation not to cause death, and an obligation not to pursue it as its own end.
I'd imagine that turning off someone's life support (respirator) is akin to killing them, but something like a Do Not Resuscitate (DNR) order would be allowing to die.
It kind of depends on whether you are some innocent bystander, or some employee of the trolley system who is actually trained and certified to pull that lever to divert that trolley and tasked with minimizing the loss of life.
Given that the problems are presented as if the decider is an innocent bystander, perhaps inaction is the most sensible choice.
Even if someone isn't yet brain dead, if there's no hope of recovery and it's just going to be a long slog of supporting more and more organs artificially until either the brain gives up or we hit an organ we can't save, I have no problem with allowing death to come because we stop supporting at least one organ, without which the person cannot survive.
Ultimately, it's the mechanic who was responsible for maintaining the brakes; but you, as the controller of the switch, have an illusion of free will that causes you to act according to your predetermined nature and upbringing.
I think it's a problem that invites our cultural biases more than we realize. If you see a baby falling from a second floor, and you can easily catch it but don't because you dislike babies, is the baby's death your fault?
Different cultures would probably have different answers to this question.
The problem is to pick a timeout long enough to allow the player to at least read through the description and understand the situation, but not so long that players get impatient and pick the “action” because they don’t want to wait (and there are no real stakes on the website). A timeout could distort the results more than the current version.
After the first level you’ll know how it works, and then I don’t see much of a difference to the current version, except that you lose those players who are annoyed by the four-second wait.
Would be great if you had to actually pull a lever and hold it, being able to release it at any time. The cartoon figures' faces would change depending on whether they're currently in danger or not. Then see how many people change their mind mid-pull, as with the best friend or the cat/lobster one.
As long as the trolley moves, you can switch back.
While I have mixed opinions on the game (literally mixed, i.e. both good and bad, not a euphemism for bad), it provides more uneasy emotion than any other trolley problem game I played.
The "bad" part is that it is a paid game, working only on Windows, while at the quality of a Flash game from the 2000s from Newgrounds. The good part is that it is at the creativity level of a Flash game from the 2000s from Newgrounds.
I've played some Japanese games where indecisiveness is represented as inaction and is one of the options; the first time I realised that, it definitely upped the ante.
Ok, this is really interesting, is there some UX where inaction itself stops being the norm?
For example 2 buttons, with a timer to the side, which when it hits 0, rolls a dice and picks a random outcome.
Does the above now make someone want to choose?
If not, is there any mechanism that doesn't forcibly require a decision? (I.e. beyond just stating that you must make a choice, or placing the chooser in a situation where a choice is a functional requirement, such as being in a locked room you cannot leave until you press a button.)
2 mutually exclusive buttons/levers, plus everybody dies if you do nothing. This way inaction is still an option, but strictly worse than either of the actions/buttons.
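The timer-plus-dice mechanic proposed above can be sketched in a few lines. This is a minimal illustration, not anything from the actual game; the function name, the four-second default, and the `"track_a"`/`"track_b"` labels are all made up for the example. The key design point is that once the timer expires, inaction no longer maps to a privileged "default" outcome:

```python
import random

def resolve_choice(player_choice, elapsed, timeout=4.0, rng=random):
    """Resolve a two-button dilemma with a timeout.

    If the player pressed a button before the timer ran out, that
    choice stands. Otherwise the 'dice roll' picks one of the two
    tracks at random, so doing nothing is no longer the norm.
    """
    if player_choice is not None and elapsed < timeout:
        return player_choice
    # Timer hit zero without a decision: inaction yields a random outcome.
    return rng.choice(["track_a", "track_b"])
```

For example, `resolve_choice("track_a", elapsed=2.0)` returns `"track_a"`, while `resolve_choice(None, elapsed=5.0)` returns one of the two tracks at random; passing a seeded `random.Random` as `rng` makes the coin flip reproducible for testing.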
Another point is that a lot of these scenarios are easy for us to think through, but to be in the actual situation and physically push someone/something and potentially see someone die would make a lot more of us indecisive.
I only find this true when there's someone I know at stake. If the choice is between 1 or 5 random people, I'm going to save the 5 people 100% of the time. If that 1 person happens to be a family member or best friend, that choice is going to be a lot harder and potentially different in the actual situation.
And like why is the trolley about to run over five people? And why can't you personally jump in the way? Who designed the train track? Same person that wants you to scapegoat some fatty?
Level 20 was weird. It was a choice between letting a trolley run as normal ("emits CO2, kills 3 people in accidents over 30 years") or running it into a brick wall and decommissioning it. For some reason more people picked the latter. So, they just dislike public transit? What about the emissions and death rate when everyone switches to cars instead?
Goes to show how easily the context and exact wording of a question can sway people's opinions.
If you included "the trolley actually exists in a realistic world and is part of a public transport network" then sure, I won't decommission it, but trolley problems are weird zero-context questions about trade-offs and I assume they fully describe the consequences of choosing each leg when answering.
It doesn't help that actual trolleys are called trams where I live, so I think of trolleys as philosophical constructs that don't have a real existence.
They aren't zero-context; all the value that you attach to a human life is context. If you don't grant anything else, you are just choosing one cartoon line-drawing over another. Certainly you are free to look at it this way, but it isn't really a useful point of view in a discussion about the hypothetical consequences.
But they're not... real. What someone says they might do in a trolley problem situation vs what a person would actually do I would assume basically have nothing to do with each other.
Actively making a real life or death decision is quite a lot different than reasoning though a game.
Certainly the simulation and the action in real life have some connection; it's just a matter of how strong the connection is. I guess no one will test it, though. At least, that is what I hope.
But it's problematic to assume these problems are in a 0-context vacuum because that would mean your actions don't actually have consequences. If the world ends immediately at the end of the experiment, then your choices in the experiment don't have any meaning, imo.
However, if you're in a plane nose-diving vertically, shooting someone a second before the plane hits the ground (in a way that obviously is going to kill everyone) hardly has any moral relevance. It's taking 1s of life from someone, probably comparable to smoking a cig near someone. More so, this is a low-quality second of life, with all the stress coming from the awareness of imminent death.
So the argument still stands, the difference in the consequences, when considered outside of any context, is simply not worth the mental effort to figure out if the lever should be pulled. The exercise is only interesting because it provokes to think about real life scenarios.
Right but if you ask someone whether that situation is murder I bet most people would say yes. And I bet they would apply the same logic to someone on death row or with a terminal illness.
> It doesn't help that actual trolleys are called trams where I live
Does anyone really still call them that? A trolley is not a tram, it's a trolley, and harkens back to the last century. In the US, some people call them trams; most of us call them trains. Portland does have a special version they like to call a streetcar, though: functionally a light rail train, but running in normal lanes of traffic.
> It doesn't help that actual trolleys are called trams where I live
Yeah, when I clicked on the link I was hoping that finally someone had solved the problem of never having a pound coin to unlock trolleys at the supermarket.
> For some reason more people picked the latter. So, they just dislike public transit?
I mean, it seems like it's the same trolley that has run over a lot of people in the previous 19 levels, so why wouldn't I want to decommission such a bloodthirsty trolley?
Isn’t the trolley also reducing carbon emissions by killing so many people? With enough of these questions, it will be able to reduce the world’s population to zero, at which point there should be no carbon emissions from humans.
You can run the trolley into a giant vat of molten glass that would pour over all the dead bodies of humanity and freeze them from decomposing into the atmosphere. (note that the energy used to create this large vat of glass was from renewables)
The gist is that we design our streets in such a way that people drive dangerously fast on them when they’re not slowed down by congestion, and during the pandemic that’s exactly what happened. Some other countries (particularly in Western Europe) take a safe systems approach to street design and did not have the same uptick in fatalities that we did.
Thank you for the interesting article. I started skimming it after a little bit though and my eyes happened to pick up this line out of context. What a name to go with that quote:
"“They were just sick and tired of kids being killed in the streets,” says Jason Slaughter"
Keep existing public transit system that contributes to X deaths per year from pollution an accidents, or...
...pull the lever and...
Replace it with a public transit system of bloodthirsty trolleys. There is zero pollution! Only 1% of X deaths per year from accidents! Just one catch: all the accidents involve the trolley morphing into a scary mechanical beast and biting a random person's head off.
People also cause pollution, so it might be less polluting in the long run to keep the trolley.
But more seriously, I just realised that this is actually an argument that's often used against various environmental measures. People argue against wind turbines because they kill birds, ignoring the fact that pollution from fossil fuel kills far more, and birds are killed in larger numbers by other causes (cars, windows, cats). But somehow inaction seems more moral to some people if that action still leads to some deaths. Same with a multitude of other social issues.
If I remember correctly the original trolley problem was actually two problems. One was "pull the lever to kill one person instead of five" the other was "push someone onto the tracks (killing them) to stop the trolley and save five people".
More people were in favor of pulling the lever than of pushing someone onto the tracks. This indeed hints that the more direct the action, the harsher the judgement.
Yep, the original is "pull the lever to kill one person instead of five", and there are many, many more variations with more direct action. The pure utilitarian will still choose "kill one instead of five" no matter the situation. But some variations come up with things like: "you are a doctor with 5 patients who need 5 different organ transplants and are going to die; do you kill the hospital janitor to harvest their organs to save 5 people?"
My logic on that particular question was that someone would probably get a new trolley to replace it (trolleys are insured, right?) no matter what I thought, so I let the trolley go.
But not your pool pump. If you have an old pool pump, then you should definitely consider replacing it with a modern one, as they function better, run more reliably, and use far less energy.
With a home, that's not universally true, if it's poorly insulated for example. Trolleys, sure: they are all electric, so you can go down to zero emissions if you use green energy sources. Producing them is another story, though; metal production still runs on coal today, just like in the 19th century.
Even if your home is poorly insulated it would take a long time to recoup the (direct and indirect resource) costs of a new home with better insulation.
Taking into account future discounting, it's probably almost never a good idea.
> However I did not take into account the CO2 emissions and possible industrial accidents at the trolly factory or trolly materials mines.
I would argue that it shouldn't factor into the equation. The polluting trolley will be replaced sooner or later, anyway. You're just moving the date forward.
The catch is that the argument could be made every year, with a new, more efficient trolley replacing the previous one. In which case the construction costs obviously need to be considered, and it wouldn't be worth spending 100 CO2 to save 10.
Whether it's public or private property, the trolley is really not mine alone to destroy. We need to decide, together, what to do with it. Maybe we could stop it and put it in a museum. Maybe we could change its source of power and make it a green trolley.
My feeling was, if I let the trolley go, it can be destroyed later. It will be harder to undo the decision to destroy it. Who knows, maybe the transportation capabilities it offers can save lives by rushing people to hospital. I just don't believe that pulling the switch and destroying a vehicle is really the only possible opportunity to stop emissions that will get released over 30 whole years. So I didn't pull it.
Basically the same for me. I wouldn't actively go out and destroy someone's car to reduce emissions. I'm not going to take an action to destroy something for the sake of making up for the shortcomings of our collective global leadership.
I always assume trolley problems are in a vacuum without context. Otherwise for every one of them you have to ask, "what is the background of each person on each track" to make a proper choice.
I assume the information they have told me is the only thing relevant and everything else is equal.
In philosophy Kant would disagree with you with his Categorical Imperative[1]. Rawls also went that way, with the veil of ignorance[2]. The lack of state or context is not only desirable but is the whole foundation on why their theories are rational.
Of course, many philosophers disagree, and arguably terrible things were done with such a mindset[3]. The point I am trying to make is that if you need context then you cannot have a universal rational decision; thus you will be imposing your own morals, possibly on others. You might be fine with that, but now you have relativized morals.
PS: I highly recommend the Great Lectures on Audible on philosophy. Specifically "Why Evil Exists" and "The Modern Political Tradition: Hobbes to Habermas".
The first one leans more on the religious side, but provides great background on moral and ethical thought, starting with Gilgamesh and running all the way to 20th-century psychology.
The second is a great overview of Western thought, with philosophical counterarguments from thinkers I had never heard of, but whom I find essential in forming one's view of how Western society should progress. That book changed my life through the sheer exposure to captivating, sometimes conflicting, ideas. I loved Rawls but consider myself a freedom guy. Now my thinking is clearer: how can we measure whether further redistribution would harm the most disadvantaged? This is still open, but it's a more practical question that I can use to evaluate certain concrete policies. Example:
In Portugal the minimum wage has been increased to the point that it matches the average salary. A person who did not train to specialize and works in a coffee shop earns about the same as an engineer who spent their first 24 years studying. This is harmful because, without differences in rewards, harder but necessary professions will not have enough practitioners. Therefore society as a whole will suffer, because it lacks the trained specialists needed to improve the untrained coffee worker's life. "Bam, simple as that". It soothes me in the face of the torrent of events and ideologies pushed upon me all the time.
I think it’s certainly a myth that an engineer’s profession is “inherently harder” than a coffee brewer’s.
In my opinion, people who would like to study engineering would do so not because of what they are paid (the situation today), but because of their genuine interest.
On further reflection, it seems ludicrous to believe that the hardest professions are the best paid. The average CS job pays obscenely well but is relatively trivial, while primary healthcare providers, essential services workers, and so on are severely underpaid.
It’s not a myth for those who have to study 16 hours a day to understand all the intricacies of building structures with safety factors. The integrated corpus of engineering knowledge is just far greater than that of coffee brewing.
Of course, that doesn’t mean that the hardest professions are the most paid.
I specifically mentioned training because I knew this answer would come, and like clockwork it did.
Training takes time, and is often unpaid. A coffee shop worker is rewarded immediately, while the engineer is not. If they are equally compensated, then the engineer is at a disadvantage.
> In my opinion, people who would like to study engineering would do so not so because of what they are paid (the situation today), but because of their genuine interest.
Marx said the same. If the people would be free they would produce out of their heart's desire. As it did not turn out that way the party needed to force the worker's desire into them. Literally to free them by force (freedom by force is another concept in itself :)).
Fair argument but I don’t see why training should inherently be unpaid.
In fact, I don’t see why the ability to have a good standard of living should be predicated on your profession in the first place.
The fact that engineering training is… unpaid, as you say, seems to put people who cannot afford to forgo income for rent, food, etc. at a clear disadvantage. Those already rich (and able to afford unpaid training) would just get richer if they are paid more than an “untrained” worker.
> Fair argument but I don’t see why training should inherently be unpaid.
Because the lack of payment is the trainee's quid pro quo: it makes the trainee a counterpart in the training investment. In several countries in Europe, to varying degrees and in various forms, the state pays for the training. It pays the professors and facilities, and can even pay scholarships so that a disadvantaged background is not an impediment to such training (tackling the inequality issue you mention). I am not aware of any institution/state paying even a minimum wage up to master's level.
Then there is the issue of difficulty, where it is clear that learning to serve a mug of coffee requires less effort than learning advanced calculus. If you pay/reward the student engineer the same as the coffee waiter, you will likely still have fewer engineers[1]. Fewer engineers will produce less useful infrastructure or wealth to redistribute, which the coffee waiter could have benefited from. Therefore, according to Rawls, having them earn the same would be unfair.
> In fact, I don’t see why the ability to have a good standard of living should be predicated on your profession in the first place.
It is not. It is predicated on the value the profession generates.
[1] The reward/baseline ratio feels like a reaction coefficient, where you use a higher reward to make an unlikely reaction become possible :)
PS: It took me a good while and rewrites to come up with a satisfactory answer. Thanks for the comment.
I don't see how minimum wage is the problem. Engineers aren't better compensated in other markets because of low minimum wages but because they are expected to deliver more value. The Internet tells me that the minimum wages in Portugal is 823€ a month. That isn't a lot compared to many other markets. Sounds more like a productivity problem. Or that someone else is capturing much of the value.
High minimum and median salaries are usually good for engineers as it requires more productive companies which requires more engineers. When workers are cheap few wants to pay for expensive systems.
Of course not exactly, but if most people earn the same minimum, then the average will be close to the minimum. There is no cap on the maximum, but with so few high earners, they do not move the average much.
No, the economy will just become uncompetitive for engineering companies and produce more coffee-shop workers. Given Portugal's economic dependence on tourism is perhaps by design.
Yeah, that one made it clear how much context matters. These problems pretend there's no context, but we always look at them with some subconscious context in mind.
In this case, I did pull the lever, because it could be replaced by an electric trolley. But if killing the trolley means more cars and more car accidents and CO2, then of course we should keep the trolley. But if we ignore context and just look at kill 5 people or kill the inanimate object responsible for killing those 5 people, the answer is obvious. (But did I really ignore the context there?)
Others also have important subconscious context. Sacrifice 5 elderly people to save one baby. How elderly are they? Are they so elderly they're just waiting for death, or do they each still have more than 20 years of relatively healthy life left? And thanks to modern health care, that baby now has a very good chance to survive until adulthood, but 100-200 years ago, that chance was only 50%. Maybe not worth sacrificing 5 people for. I think that one was the hardest.
(My friends may be somewhat worried to learn that I had surprisingly little trouble sacrificing my best friend for 5 strangers. But really, I'd prefer to kill whoever keeps tying these people to the track.)
Eh, it doesn’t matter how old the people are, really. They have all already gotten to experience the joy of life. The baby hasn’t, so they have the most to lose.
It does matter. Five young people have been fully invested in & will now produce value for the next few decades. Whereas a baby has still not been fully invested in, years of education etc, so they're cheaper to replace
the baby (young enough) doesn't understand the situation
also, you can easily flip it: elderly cannot be saved from the torture of existence, having lived so long, but the baby has barely suffered anything yet - we can save it!
The elderly also have wisdom to offer. The baby only has potential wisdom. Of course, if you're trading on potential, the baby could potentially grow up to be a serial killer. Or the elderly could potentially be mute.
Easiest one IMO. Spare an evil baby who would gladly sacrifice every single person on this planet for some ridiculous thing they feel entitled to or spare five diligent elders?
Right, you can run it in to a brick wall, ending the CO2 emissions and its usefulness as part of a transit network, and also, of course, depriving its owner of a valuable asset (the trolley)
The results on all the questions implied to me that most people answered more as if it was a quiz than a philosophical question. That is, they'd tried to select the most logical/utilitarian answer based on the wording of the question rather than necessarily thinking about what they personally would do in that situation, or any wider context that may exist.
Actually, streetcars are kind of a bargain because they use mostly existing infrastructure. But that’s the downside too, because they can easily get stuck in traffic. The LA Red Car system died in the end because, without its own right-of-way, it was worse than cars and less convenient than buses.
Then give them their own right-of-way. Amsterdam trams go through the streets, but you're not supposed to block them. They often have their own lane (though frequently still shared with buses and taxis), and when they don't, it's because the street is too narrow, but that's usually only for a limited section of the street.
You still need tracks and power lines. If the only benefit of streetcars over buses is right-of-way, why not just give buses right-of-way?
After some deliberation, my municipality recently decided against a streetcar system and for a bus-based system with electric buses, right-of-way everywhere, and dedicated bus lanes where possible (nearly everywhere).
Only to decide to recommission them in many places, increasing the heart-attack rate for those trying to get to work on time through the six-year-long construction zones.
From my point of view, trams have no advantages over buses. On the other hand, they need lots of special-purpose infrastructure and often constrain road traffic rules terribly, like forcing traffic lights instead of roundabouts. Still, I am open to hearing the advantages of trams.
Running on rails is super efficient once they are in place. Less resistance, less wear and tear. For high-frequency routes, they really are great. But for new routes, trolley buses are probably a lot cheaper to set up. Maybe use trolley buses first, and only switch to trams once it's clear that this really is a high demand, high frequency route that's not going to go away.
Still, I feel that trams will be dying soon. The improved efficiency is simply not enough to justify the extra infrastructure and inflexibility. Modern battery electric buses have a range of 500+km and can thus be used for an entire day, and charged at times when electricity is cheap and green (ie overnight on wind energy).
Battery electric buses require batteries, which may be flexible, but are also expensive.
I don't see existing trams and light rail disappearing soon. Not in cities that have well-functioning networks, like Amsterdam. But I can imagine that for new projects, electric buses are preferable.
One of the major problems with anything that isn't on rails is that it is easy to make go away. Unironically one of the reasons for trams is that it's harder to rip up the tracks and delete the service entirely as opposed to bus routes which can vanish in a single day or be altered in incredibly dumb ways. Of course, if you're sufficiently destructive you can rip up any infrastructure.
They are far more comfortable than buses. Larger too. If you rely on public transit, then your city switching to trams for the highest frequency routes can be a significant quality of life improvement for you and the many thousands of others who will use them every day.
That isn't their only advantage, but it's one which is often overlooked.
This is a natural response, but I think it also misses a good opportunity. Philosophy setups like this are always massively unrealistic, but the same is true for almost all physics problems, especially so for first year undergrads. We usually accept the latter as, nonetheless, important pedagogical tools.
If you play along with the isolated premise (without adding in "nuance") of these Trolley Problems, I find they can act like little experiments on our moral intuitions that can illuminate one small, isolated facet of the full complex moral machinery. It's likely a bit like Michelson-Morely, though, where it takes work to digest the experimental results into a useful model of the underlying mechanism.
In that light, it's also interesting to start adding in the nuance you mention, piece by piece, and see how our moral intuitions change, vacillate, and even give simultaneous conflicting answers.
Agreed. They don't say anything about a replacement trolley; maybe the new one will kill slightly fewer people, maybe they won't replace it at all. There are way too many factors to consider, and I am not really the management of a transport company making these decisions.
My dad never grasped thought problems either. Same for riddles. He would always try to find a crack in the description or wording that he could exploit to find a solution you didn't expect, when in reality it just subverted the thought problem, transforming it into a different one that was easier to answer, but not useful to anyone.
If the choice was between 3 deaths over 30 years and something more than that, then the problem would have stated as much. But it didn't. You don't add extraneous details that you thought of to bend the problem to what you want it to be, you make the choice between two sides as they are presented.
I assumed that the trolley company would need to clean up the mess, with lots of construction equipment, and would replace the trolley, incurring the cost of building a new one. I guessed that the CO2 emissions from that result would dwarf the emissions of the extant one, so I left it alone.
Or people walk instead, emitting no carbon and killing zero people...
This is what's fascinating about the Trolley Problem, and philosophy in general, and how it applies outside of philosophy: most people struggle to answer questions based on the evidence available without bringing in some external justification for their answer. People want a 'logical' reason to justify their choice. They can't say "I don't know", or "I used the limited data I had", when the choice they'd make goes against something they believe about the real world (e.g. 'public transport is good'). They bring in a "but what if <imaginary thing that supports their biases>!" and make a decision based on that instead. They even believe they made the right choice.
This applies to everything from choosing a tech stack to picking who to vote for. It's infuriating once you see it.
I prefer the emissions and death rate of people living in their own mostly self-sustaining little village, within walking distance of everything they need, and not needing to constantly travel around to other people's villages.
I was one of them: Somewhere around 10-15 I got bored and just wanted to see all the scenarios so I was mostly just going "Do nothing.. Do nothing.. Do nothing.." for the rest.
I hope not all the people took this seriously, hence (probably) some of the answers.
I know I responded with "do nothing" to 100% of the questions; it's not my job to solve the problems of the Universe (and if that good person had been that good in real life, then he/she wouldn't have ended up in front of an incoming trolley while the bad person was tied down on the other, "take action", line).
We’re talking about some damn cartoons. That’s the whole “philosophical” point of this trolley problem, which can be better described as a sadistic game, i.e. that we shouldn’t let this type of nonsense get between us and our real “inner values” (for lack of a better term), we shouldn’t let the people who “apply” this game condition us.
Of course this is a satirical game that should not be taken seriously, but only because at some point it starts making you choose between 5 lobsters and a cat.
The point of this type of questioning is to learn how moralist vs. utilitarian you are.
It makes you question what those inner values are, where they come from, and whether they are rational.
I wouldn't do it to a real-world trolley, as it would cost a lot to replace something that is better in many respects than the alternatives. But I wasn't taking it too seriously, and they only gave the trolley negative traits within the question.
No one likes public transport, but people like motorized traffic even less. It's a relative choice.
The option should have been between decommissioning the train, saving 3 people and a few tons CO2, and decommissioning all motorized vehicles, saving millions of people every year and having a real impact on CO2 reduction.
"No one likes public transport" - if by that you mean almost anyone would prefer to have forms of transport that don't have all the disadvantages of PT (being crowded, limited routes, scheduling/timetabling issues) then sure, but it's also true almost anyone would prefer to have forms of transport that don't have all the disadvantages of private travel (cost, traffic, parking etc.). I'd still suggest there are plenty who'd choose much better public transport over much better private travel options. (Disclaimer: I'm in the latter camp - I get myself almost everywhere by bicycle where possible).
In Germany, a lot of the time, public transport is awesome. I love public transport here. You can always find something to complain about, and of course there will be exceptions, but compared to other places I've been to, public transport here is available, useful, and clean.
I have an anecdatum of one, that being myself: I like public transport. It's just that US public transport is god-awful; you may be thinking of the US style of public transport, or the lack thereof, and thus came to that conclusion. Which, fair enough. It's just not representative of the wider world.
Whenever you visit a web site, that's tied back to your device and your name / identity. Data brokers have browser fingerprints. You have lots of CDNs hosting content and third party cookies. You're not anonymous online anymore.
Many "fun" surveys are built to help collect data to better profile you. There are many techniques here, from asking a mixture of innocuous and profiling questions, to looking for correlations. With a few, you can extrapolate a person's age, political leanings, etc.
This is snowballing, and it's hard to predict how data collected today will be used, or what inferences will be possible. It's safer not to do a threat analysis every time, but just to click one button each time and not risk letting profilers know your value system.
>Oh no! Due to a construction error, a trolley is stuck in an eternal loop. If you pull the lever the trolley will explode, and if you don't the trolley and its passengers will go in circles for eternity. What do you do?
50% of people pull the lever?!?
I hereby declare that if I'm ever going to be stuck in a trolley for my entire life, I do NOT want the lever pulled. Toss me a smartphone charger and my life wouldn't even be that different, day to day.
I interpreted it as the passengers would go round in circles for eternity, but not necessarily living passengers. I assumed they would die of hunger and dehydration which seemed less humane than instantaneous death by explosion.
The year is 2199. The trolley residents have figured out how to tap the trolley's motion for near-limitless power. Despite its small size, the trolley has developed a stable population of 24 (give or take over the years) and has dubbed itself Trolland. Their primary exports are electricity via pantograph back to the mainland, information services, and entertainment syndication, as the trolley offers a 24/7/365 reality TV experience akin to The Truman Show.
The Trolland Show draws millions of viewers. The residents initially had relatively normal interaction with friends and family via Zoom calls, but as the first generations died off, a new social order evolved. The community is very tight-knit and intimate (in every sense of the word), while having parasocial relations with outsiders, whom they call Stationaries. A Cult of the Trolley, which worships the eternal engine, has sprung up, periodically giving offerings to the trolley and its denizens. Ratings skyrocketed one day in 2077 as five cultists, in an odd sense of irony, tied themselves to the track as a sacrifice to the great Prime Mover.
On a serious note, it feels worse in every case that the end of your life is in the hands of an external actor.
Best case scenario you can do what you want and deal with whatever urge you have at your pace. Worst case scenario you don't have agency, and just stay stuck suffering instead of dying. Personally I'd take the chance.
My response, to leave them there was based on 1) not taking lives as a result of my decision, and 2) if they're alive, they might eventually find a way to stop the trolley and get off. My choice gives them that chance, in my opinion.
Would it change if you were stuck on the trolley with the type of people that seem to occupy my city's transit now that fare enforcement no longer happens and no one is asked to leave the trolley at the end of the line?
> stuck on the trolley quite literally for eternity
What does "literally" mean in this case? Am I completely certain that I'll be stuck on the trolley for eternity? I would think that I'm having a psychotic break and delay my decision.
Literally, as in, if a passenger finds a way of getting out of the trolley somehow, they will be put back in so the thought experiment can continue working as intended.
Absurd amendment to an absurd trolley problem.
If it were truly the one single opportunity I will ever have to choose to stop existing, it's a tougher choice for sure.
But on the other hand, I might be so curious about what in the heck happened in the world outside - the world that led to this universe where it will be impossible for me to die for eternity, and somehow I know this with absolute 100% certainty - that I still opt out.
Well, perhaps it is not possible to die in this universe, and there are several takes on this idea, see e.g. quantum suicide on Wikipedia; or perhaps time starts looping (e.g. you reincarnate as yourself), or perhaps you even reincarnate as somebody else (in the future or in the past). Remember: we know very little about what time and consciousness are.
The "you have solved philosophy" message at the end of the game is very far from the truth.
I guess I may have misinterpreted this as a choice between forcing the passengers to go around in circles for literally eternity versus dying immediately. Figured that it'd get awful boring after the trillionth or so revolution, especially if there were no other options. At some point, I imagine the passengers would be begging for the sweet release of death and I didn't want to condemn anyone to that fate.
I pulled the lever. It said "eternity" and I couldn't possibly condemn the passengers to eternal life. Maybe it's because I read The Eyes of Heisenberg by Frank Herbert [0]
My reasoning was someone onboard might make a valuable contribution to the world even if they are stuck on the trolley, and the cost to the rest of society for letting it run seemed small.
I reasoned that the trolley is essentially just a scaled down earth, and plenty of people feel perfectly fine living their lives confined to that looping prison.
If you can communicate with the outside. Or maybe humanity exterminated itself besides the people in the magic trolley before you decided to get blown up.
But they are not immortal there, are they? (I never read/saw it.) And it's 'slightly' bigger than a trolley, with more humans, so you can have (and I think they do have?) a small society.
1. Only pull the lever if you are sure. With an action you assume responsibility for the outcome.
2. Don't believe everything you see or are told. If in doubt, do nothing. Imagine you killed many people because you had been lied to and pulled the lever. In that case it is better that you did nothing.
3. An action of yours might kill someone. This is a very heavy responsibility to assume. You need to be very sure, and the odds need to be extreme. None of the problems managed to tip the odds. So whenever I was afraid that pulling the lever would kill someone, I didn't pull it.
4. If my life is at stake, I pull the lever to save myself.
5. If I am extremely sure that no lives are at stake and I am relatively confident that pulling the lever will avoid lots and lots of damage, then I pull the lever. If I am unsure, I do nothing.
6. In all other cases responsibility is too high and I won't do anything. The idea: don't touch the damn thing.
The website told me that I decided differently than the majority of people and finally that I "have solved philosophy"
EDIT: The top comment made a good point. I had to click a button to do nothing. This made me realize that I had a silent assumption:
7. Time to decide is short. The trolley is already coasting. So I didn't think long and hard because I didn't have time and when I was unsure I would have let the trolley coast past without having pulled the lever.
I noticed I kept switching between several principles. Possibly triggered by the details of the problem. Some of the principles:
1. If you're able to act, you are already responsible for the outcome. Inaction is a choice too. There's no intrinsic moral difference between pulling the lever or not.
2. Acting from a position of ignorance is irresponsible. Better not to act than to take the risk of making the situation worse. (This contradicts #1.)
3. The person who tied these people to the track is the one who really carries the responsibility for this tragedy, not me.
All three of these are valid, and yet contradictory to some extent.
Apparently I solved philosophy at the cost of 59 lives. Not sure it was worth that sacrifice.
Imagine being at a railyard at the time of a crash and being seen throwing a switch in front of a moving train.
There’s no real-world case where that person is not considered responsible.
Not to mention that trains really cannot go over switches at speed. A runaway trolley would probably derail at the switch if it was set on the side track.
Exactly. In real life, I'd try to derail the train, and then call the police so they can figure out who tied these people to the track. The problem is a very artificial situation, and therefore hard to apply to real world ethics.
That's clearly a reasonable assumption, since a trolley will kill people underneath at any speed, no matter how low (the slower, the more horrible the death will be though…)
Only in the narrow situation where you know all possible outcomes and have a sufficient time to consider them. In the real world, given a limited amount of time and ignorance of many details, inaction is not even remotely the same as action.
Not sure why you state #2 contradicts #1. If you are in a position of ignorance, choosing inaction (unless and until said ignorance can be rectified, anyway) is the correct action.
> 4. If my life is at stake, I pull the lever to save myself.
I find this answer really interesting, because in the entire corpus, this was the most morally unambiguous question of all, and the answer is diametrically opposed to yours.
Whether I would have the courage to do so in real life, I don't know; it's probable that I would not. But from a moral perspective it's damn clear: the only person I am unambiguously allowed to kill to save someone else is myself. Killing someone to save someone else is a moral dilemma, as it makes me take someone's life, but killing myself isn't.
Interestingly, I had almost the opposite reaction. In the absence of information, I don't know how these people got here or why. Maybe they're being executed as criminals, maybe they are suicide attempts, maybe they are truly heinous people in the hands of a Just God.
But, I do know, if it's me on the track, I don't want to die. It's almost the only truly inarguable choice. With everything else, you think you know best for other people. With that one, you just know you don't want to die.
I think you're mixing up two things: willingness to die, and the morality of the sacrifice.
Most of us will be scared shitless if we were in that situation, and I'm pretty sure the majority (me included) will act as a coward. But nobody would blame them, because we all know we would likely have done the same.
But on the other hand, we praise the heroes who sacrifice themselves because we recognize that they did the right thing, no matter how much it cost them.
I think we have the right to save ourselves with the options we have available. I didn't tie these people to the rails, I didn't set the trolley in motion towards people tied on rails, I'm not the one that had any sort of input on what the safety systems should be for the trolley. If I pull the lever to save myself and 5 people die, they die as a result of circumstance and insufficient safeguards of the trolley system itself.
I also find it interesting. I wonder how extreme the parent comment would go? Would they sacrifice themselves to save, say, a million people?
A solipsistic view might say that those people are part of our own consciousness. If we let them die, we continue to exist. If we sacrifice ourselves, all of us cease to exist since they are just manifestations of our consciousness. As Christopher Hitchens once said, beware of solipsism...
> I wonder how extreme the parent comment would go? Would they sacrifice themselves to save, say, a million people?
On the other hand, how many people would you need to save to sacrifice yourself? 5? 2? 1? Does that one person need to be younger than you? Not disabled? Not a criminal?
This doesn’t seem morally unambiguous at all, and in fact society reached the opposite conclusion for many centuries in outlawing suicide but allowing some killing of others (e.g., the executioner tasked with killing the person convicted of attempting suicide).
An equally compelling principle might be that the only person I know to be worth saving is myself.
The ultimate trolley problem is one where you pull the lever to save a larger number of people who have chosen an unsafe area, while punishing a smaller number of people who have chosen a safe area (unless someone pulls the lever)... Ultimately, this is politics.
Morality is not math, but, as always, things might be more foggy: lack of free choice and a really skewed ratio (e.g. punish one to save 100 million) may test the limits of morality.
Please be explicit. Do you mean something like insurance: a small number of people have insurance, but someone decided to insure even the people who haven't paid for it?
In that case it's clear that I won't pull the lever. It's not a question of life and death, and because I am not sure whether pulling the lever helps preventing lots of damage, I won't do anything, especially because perhaps it's just a vague and abstract story you made up.
Government builds a dam. It clearly recommends that people not build in front of the dam: it may crack, and a flood would be a tragedy. 100K people ignore this, and slowly houses appear: it's cheaper, better land, views, etc. 1K people choose to comply with the Government and build in a safe but ugly and infertile area. One day, suddenly, a huge crack appears. At the last minute, engineers discover "a lever" to flood the safe area instead and save 99K lives. They don't have time to evacuate anybody. Should they pull the lever?
By limits of morality I mean:
- Free choice: if the people living in the unsafe area didn't understand the danger or didn't have another choice but to live there?
- Skewed ratio: if only 10 people lived in the safe area and 1 million had chosen to live in the unsafe area?
It's easy if you're a politician up for reelection. If the 100K are your base voters, then pull it: that's 100K potential voters. If they're not your base voters, then let them perish.
If you're an entrepreneur, see whether the 100K people are more beneficial than the other 1K. For example, if you have a factory that needs a large workforce and can utilize the 100K, save them.
If you do pull the lever, hold a press conference with a sorrowful, guilty face, saying it was inevitable. If you don't pull the lever, don't disclose the information until asked.
If you're not a decision maker, leave it to your supervisor. If you somehow have a loved one among the 100K, pull the lever anyway.
This is alarming yet accurate. One simply has to look at things like automobile recalls, the 737-max debacle, or the Camp fire, to see actual examples of the above heuristic in action.
"A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don't do one."
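The quoted formula is a bare expected-value comparison. A minimal sketch of that heuristic; every number below is invented for illustration, not taken from any real recall:

```python
def should_recall(fleet_size, failure_rate, avg_settlement, fix_cost_per_car):
    """Apply the quoted heuristic: recall only if expected settlement
    payouts (A * B * C = X) exceed the total cost of the recall."""
    expected_settlements = fleet_size * failure_rate * avg_settlement
    recall_cost = fleet_size * fix_cost_per_car
    return expected_settlements > recall_cost

# 1M cars, a 1-in-10,000 failure rate, a $3M average settlement, a $500
# per-car fix: X = $300M < $500M recall cost, so the heuristic says no recall.
print(should_recall(1_000_000, 1e-4, 3_000_000, 500))  # False
```

Written out this way, it's plain that the heuristic prices lives only through settlement costs, which is exactly the objection the surrounding comments are raising.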
More realistic example: The contractors in charge of the dam cut corners to pocket some extra money, but they'll be dead before the dam breaks anyway so they move into the valley. Land developers paid off the government to open the land for construction despite it being unsafe. Bankers gave predatory loans to those who would not otherwise be able to afford a house in the valley. The public has been misled and everybody is under the assumption that their society has reached a new age of prosperity where everyone can now live in the valley. At the last minute, the politicians, bankers, dam contractors, and land developers use their private helicopters to leave the valley before the flood crashes down.
Yeah, one of the issues with the trolley problem is how literally you take it. If you take it perfectly literally, as "5 vs 1" (for the pure example) with no other context, I think you can very easily come to a conclusion that is the opposite of what you'd do in real life, where, due to lack of certainty, lack of perfect knowledge, or some iterative game-theory interpretation, you would act differently.
The problem is the responsibility. If you aren't willing to take over responsibility, it's better not to do anything. If you already have responsibility then it's a different matter.
The problem I have with the premise is that as soon as you're standing next to the lever, you're responsible. You had the opportunity to pull the lever but didn't, which makes you responsible.
Similarly, if you're driving a car and someone walks across the road in front of you, are you not responsible for their death if you choose not to step on the brakes? Does inaction relieve you of any responsibility?
... then you accepted responsibility by getting in the driver's seat and taking control of the vehicle. In the case of the trolley, you are an onlooker.
re #4:
There are many instances of people sacrificing themselves to save others: Chernobyl workers, fighter pilots not ejecting but steering the plane away from airshow visitors, etc.
I expect these people to act this way routinely, and we give hazard pay to their entire profession to compensate for the risk; likewise, I consider it my obligation to act the same way when the situation is clear (as in the Trolley Problem).
Isn’t choosing not to interact when you were able to still a form of interaction? I feel like once you’re exposed to the system and what it does depends on what you do (even if “doing” is nothing) you’re involved.
> Isn’t choosing not to interact when you were able to still a form of interaction?
It is. If you can't act, then it's not your responsibility, but if you can and choose not to, then it is.
In many countries, this is codified in the law: if you see someone drown or otherwise in lethal trouble, you're expected to do what you can to save them. You can't choose not to interact in order to avoid responsibility; if you're aware of the situation and in a position to act, you carry some responsibility.
Curious which countries. Fwiw, in the US, you can pull up a lawn chair and watch someone drown instead of offering them any assistance (not saying that is moral). Legal liabilities have been explored including the case where a person is pulled from a burning auto accident and the act of saving them caused back injuries and paralysis. Cue Good Samaritan laws to protect would-be do-gooders but also the acknowledgment that there is no legal liability to act (aside from designated first responder types).
Think responsibility. Even if you watched the thing roll, you don't have a lot of responsibility. Only a little, because, as you said, you were involved.
Let's say there will be an investigation into criminal liability by inaction (in a civil-law jurisdiction). First, you aren't in guarantor status (one example of someone who is: a surgeon during surgery), and second, if you explain that you thought it through and show that the what-if case is dangerous, you won't be punished for inaction.
So, no, it does not really matter that you are involved by being there.
And now I am sure that many people could develop post-traumatic stress disorder because they feel they have failed to save people. I am sure these people can be helped if they are shown the possible outcomes of action and inaction, especially since action is often more problematic than inaction.
Nice to meet a reasonable man who solved philosophy as well. So many think that they have a right to project their ideas onto someone else's reality without a doubt.
I found myself following a heuristic that is basically Asimov's laws:
- whenever possible, do no harm
- do not let harm occur due to inaction
- when given a choice, preserve the most amount of healthy lifespan in aggregate
- higher lifeforms are more valuable than lower ones (cat vs lobsters)
- deferred consequence is better than immediate (since it opens the door to other later interventions)
It kind of really brings to bear how much of a thematic device the 3 laws are. There's no way to make them congruent with actual, messy, real-world situations. Also why the whole "self-driving car trolly problem" is a non-issue - there will never be a situation where the "AI" has nice neat consequences and a binary choice laid out in front of it. It's always going to be some collection of "preserve life as best as possible" heuristics.
There's a (famous?) quote from a google engineer Andrew Chatham who worked on self driving cars. When asked about the trolley problem, he said "It takes some of the intellectual intrigue out of the problem, but the answer is almost always ‘slam on the brakes’” [0]
I know, you're trying to save the trolley scenario. But if the brakes fail, then it doesn't matter what the AI (or driver) decides. The one to blame is whoever was responsible for keeping the brakes from failing.
What? There are still a lot of tactics a driver (or AI) can follow to improve the outcome if the brakes fail, such as swerving to avoid dangerous things while the vehicle naturally coasts to a stop, intentionally driving into a soft thing or up a hill if available (eg runaway truck ramp or a pile of haybales) or in the worst case, decide which of two bad things to run into is less bad. Not making a choice is a choice too, as in the trolley problem, it means you'll hit whatever is straight in front.
So in that sense it's quite relevant. Great, you can usually just apply the brakes. But what if you can't, or don't have enough following distance? (That's less common for an AI driver that will try to leave enough distance, but it's not entirely in your own control: another driver can merge in front of you at any time, and if an incident occurs before you can increase the distance, you will effectively have to swerve, since you won't have enough braking capacity.)
It is worthwhile to take a look at dashcam videos (e.g. on YouTube or Reddit) to get an idea of the many, many complex scenarios a driver can find themselves in where a crash is imminent, but the driver still has many options and must decide which strategy yields the least bad outcome.
But there are still plenty of either/or scenarios with a perfectly functional vehicle because there are other people on the road. Some of the interesting ones involve whether it is the AI car's responsibility to the drivers/passengers in the car first versus everyone else.
For example, a car swerves in front of you (or runs a red light, or whatever) and your car hits the brakes but knows it will not be enough to avoid running into the other car. Does it swerve off the road, where it might kill a pedestrian but will definitely save the driver, or continue to crash into the other car, which may kill the driver of one or both cars?
Is the trolley scenario intended as a blame-finding simulation? I thought it was more like “what (ethical) choice do you make in a difficult situation” deal.
You surely still have some agency after your brakes fail. Or at least, if it happened to me, I wouldn't let go of the wheel to find out whether I hit group A or B, confident in the knowledge that whoever or whatever I plow into is not my fault and all choices from that point are equal.
True, I imagine the next course of action would be something like "do nothing" — continue straight, try to be predictable, hope that other people in the situation can react accordingly.
If we are still doing the trolley problem it would be “the brakes have failed, and … do you continue straight and definitely hit A, or turn the wheel and definitely hit B”.
You should not change lanes unless it is safe to do so. The car should let off the gas, put on hazards and try to move to the side of the road. I am with the other engineers, the trolley problem does not apply here. Why waste effort deciding who lives and who dies when more time can be spent figuring out how to handle these situations safely?
> - higher lifeforms are more valuable than lower ones (cat vs lobsters)
Choosing between preserving the life of one cat vs one lobster seems straightforward enough. But the trolley problem was asking whether one cat was more valuable than five lobsters. According to the stats, many people agreed, but how about one cat vs a million lobsters? Or one cat vs all the lobsters on earth? Most people would think that making lobsters extinct would be very bad (unless they really hate lobsters).
The difficulty is when we can no longer rely on intuition and have to come up with a precise exchange rate for when one being's life is more valuable than another's, which, like you say, is impossible to do given the complicated world we live in and our limited understanding of consciousness and neuroscience. In the absence of that, deferring to the first law, "whenever possible, do no harm", seems sensible.
I chose to kill the cat, because there are too many of them and they devastate natural ecosystems, killing birds for instance. Lobsters on the other hand are badly depopulated. In both cases undoing a wrong committed by humanity.
This is prominent in the elderly or children tests. Older people have gifts of experience that can be invaluable to the rest of us, especially when broadly shared. Children have great potential but through most of human history have been relatively cheap, easy to replace, and more of a value sink than generator.
Enough context is lacking. That's kind of what makes it all absurd. Take the cat vs. the lobsters: did you know Felis catus is responsible for the extinction of some 37 species of birds worldwide, mostly due to colonization? They're not indigenous predators in North America, and they enjoy the protection and long life afforded by their proximity to humans while killing for entertainment.
Lobsters, on the other hand? Useful in their natural environments, they have not become an invasive apex predator, and they could go on to feed many people too if properly managed and allowed to live their lives.
Yet the majority of people saved the cat thinking it's a "higher life form."
A bit absurd. Also the idea of trolleys without brakes and levers and cats that can't move off the track in the face of on-coming danger.
If we are to judge ethically, one particular cat should not be punished for the sins of the cat species. As far as we know, this particular cat hasn't done anything wrong.
Then we need to talk about criminal liability, in the sense that a cat can't stop itself from killing birds. We don't punish toddlers if they kill a bird, because they don't understand that they shouldn't.
Also, the whole idea of "native species" is highly subjective. What is a "native species" today may have been an "invasive species" yesterday.
You're probably doing the local fauna a favour by eliminating the cat. The cat will kill other animals purely for amusement (killing prey without eating it). It enjoys a longer lifespan than most local predators due to its proximity to humans who care about its welfare, enabling it to kill more, and for longer, than any local predator. Stray cats are estimated to be the largest contributor to declining bird and mammal populations [0].
Also, in the city I live in, outdoor cats are illegal regardless of registration (which is also required).
The lobsters, on the other hand, if you live in a coastal city near their native habitat, could be reintroduced and improve the local ecosystem.
> You're probably doing the local fauna a favour by eliminating the cat.
One could have said the same thing when mammals appeared. Not to mention humans: "This human species will ruin the ecosystem. We'd better eliminate it now before it spreads."
Put differently, you are thinking about the narrow interest of the day, without considering whether the ideology makes sense in itself.
> Nativism is the political policy of promoting or protecting the interests of native or indigenous inhabitants over those of immigrants, including the support of immigration-restriction measures.
We're talking about cats here and ecology not human politics.
You're confusing the natural movement of species into new ecosystems with humans literally transplanting cats across the globe.
It's a fact that on some islands we settled, when we brought cats with us, those cats literally drove entire species of birds native to that island into extinction. The cats didn't walk across water or build ships to get there. They were brought there by humans and did what cats do.
This conversation is starting to get a bit absurd.
Spock was wrong: this is not how rational people behave. Rational people preserve their own life. If they die, the consequences of their choices can no longer matter to them. A cat isn't going to affect my longevity, but a cat is more likely to be a pet than five lobsters are, and the emotional distress of the owner will have some greater expected impact on my longevity than the lesser distress of the lobster fisher who lost $60 worth of catch.
Apparently I solved philosophy? I'm gonna assume that title stays the same for everyone lol.
So, an explanation for why my count is so high.
Simply put: I don't believe 5 people really got tied to those rails each time. They are in on it somehow. And if so, it's sick and twisted that they would tie someone else to the rails on purpose. So they deserve the trolley instead.
Now on the chance that they really are all innocent; I still have another problem with it.
How did they not overpower the person tying them up? The single person I can understand. Heck 2 people even. 3? 4? 5!?
no.
Something is up. That trolley is going over the 5.
(edit: Like seriously, these people would all have to have been knocked out with some drugged food or something at a party first... And that's assuming they stay asleep, don't struggle, etc...)
I am personally interrogating my response to this:
"Oh no! A trolley is heading towards 5 people who tied themselves to the track. You can pull the lever to divert it to the other track, killing 1 person who accidentally tripped onto the track instead. What do you do?"
I save the 5 over the 1. Only 15% agree with me. Why?
This is the first Absurd Trolley Problem (I think) that explained WHY a person was tied to the track.
I think the value of this hypothetical is in establishing the value of cultural relativism versus Kantian ethics.
In that framing, I'm really surprised that, on Level 27, 70% would rather send a trolley into the future to kill 5 people 100 years from now, instead of 5 people now. In almost 11K votes, this seems significant.
My view is that this provides evidence for the Bentham "hedonic calculus". (And I'm sure there are better scholars of Kant and Bentham than I that can argue for or against this.)
Here's a "political" example: Do you want to deal with problems now, or defer? 70% will defer. (I think this checks out, and is truly hedonic.)
So, I think the data, and the utilitarian approach shows: don't expect any of our societal problems (politically agnostic) to be solved any time soon.
>A trolley is heading towards 5 people who tied themselves to the track.
My reading of this was that they were suicidal. If people want to kill themselves, it's not for me to decide whether that's right or wrong. But if I can save one person from misfortune, at least I was able to do that.
>70% would rather send a trolley into the future to kill 5 people 100 years from now, instead of 5 people now.
Would you rather have $5 now, or $5 in the future? Humans are compounding; I'd rather pay you in the future, when there is more abundance.
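The compounding argument above is just time discounting. A small sketch of the standard present-value calculation; the 2% annual rate is an invented figure for illustration, not anything claimed in the thread:

```python
def present_value(amount, annual_rate, years):
    """Discount a future payment back to today at a constant annual rate."""
    return amount / (1 + annual_rate) ** years

# $5 delivered 100 years from now, discounted at an assumed 2%/year,
# is worth well under a dollar today.
pv = present_value(5.0, 0.02, 100)
print(round(pv, 2))  # 0.69
```

Whether lives should be discounted the way money is, of course, is exactly the point of contention.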
I had the same thought, but this isn't necessarily true.
With money, it's true by design. Interest rates, inflation and money expansion are core components of the economy.
However, with humans it all depends on the fertility rate, and it's currently dropping. So in this scenario, there's a lot we don't know about the future.
I sent the trolley into the future because, all else being equal, at least it gives us 100 years to plan for the event.
Eh, I don't really buy the whole fertility rate, depopulation stuff that Elon and the like have been pushing. I truly believe that was him signaling support for an abortion ban without openly saying so.
One thing has been constant through history: population growth. I have seen no evidence that this is slowing, only that "developed nations" are having fewer kids. People seem to forget that developed nations don't really represent "the world" as a whole. Southeast Asia, India, rural parts of China, and South America do not seem to have this problem, and they represent the majority of the world, as much as the West would like to hide that fact.
Also, we've already proved that artificial womb technology is theoretically possible... it's only a matter of a few generations until that's going to be abused.
Regardless of what Elon is saying, it makes no sense to assume that exponential growth will continue forever. At some point (rather quickly), you run out of resources. Population growth at the global scale has begun to slow down. I don't know when it dips below 2.0 or gets to a stable state of 2.0, but it will eventually happen.
> In that framing, I'm really surprised that, on Level 27, 70% would rather send a trolley into the future to kill 5 people 100 years from now, instead of 5 people now. In almost 11K votes, this seems significant.
RIGHT?!?
I mean, who knows the potential repercussions that portal might hold for us if we send that trolley through. Sure, 5 people might be saved today; but millions could be saved instead in 100 years when that portal isn't reopening to whoop our ass for killing 5.
I voted to send the trolley into the future, because everything else being equal (5 lives and 5 lives), it gives us 100 years to try and plan for the event.
100 years is a long time for humanity to look into ways to counteract time travelling trolleys! Sure it _says_ the future trolley will kill 5 people, but if I were in that situation I'm fairly sure I wouldn't be so certain.
In your "political" example: Do you want a guaranteed bad outcome now, or what we expect to be the same bad outcome later?
Maybe we have medical resurrection in 100 years, so killing people might not be so bad. Plus future humans get a perfectly preserved example of a 100 year old trolley out of the deal, which is presumably a valuable antique.
Wow! Like GP, I got only 83. I must have messed up a few things here and there!
My rationale was to default to inaction, because that's how most things work: most events resolve to the average behavior. This makes more sense when you realize the world moves on without your presence.
So by defaulting, I am ignoring my presence and seeing what happens without me. It was a fun decision to make.
In one case, I genuinely saved 5 sentient robots over a human.
There’s also the question of whether you should take responsibility for something that is not your responsibility, unless it is in fact YOU that tied them to the tracks.
The trolley situation is just meant to frame the "you have to choose" type of decision without starting the back-and-forth "whatabouts" that avoid engaging with the actual question, like "I simply wouldn't get into such a situation" or similar.
It's impossible to craft a scenario that 100% of people will find bulletproof, so people just use the trolley setup to convey the concept: the question is about the balance of the decision, not about how the need for the decision came about.
Can't reply to the sibling directly so in reply to it:
The simplicity of the scenarios and the ambiguity in this type of question is desired, not a fault. The goal is to set up a stage for exploring that nuance and "shades of gray" space, just as you started exploring it in your comment. The goal is not to set up the question in some way that makes one of the answers or reasonings right or appropriate. As such, the stats only speak to which lever was pulled; if the question could have all of the reasonings laid out beforehand on a single page, then it wouldn't have been a very good trolley-type question.
As for talking about the reasoning freeform, that's why it was posted here! For your particular example, I chose the old people because of QALYs: 5 × <last 10 years of life> is fewer QALYs than 50 years of life starting at a young age. There are probably a half dozen other reasonings I could think of, and many more I can't, and that's exactly the goal of the question: not actually finding a most correct answer via rigid framing.
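The QALY reasoning above can be sketched numerically. The quality weights below are invented purely for illustration (they are not from any real QALY table), but they show why 5 people's last 10 years can total fewer QALYs than one person's 50 remaining years:

```python
# Illustrative QALY comparison. The 0.6 and 0.9 quality weights are
# made-up assumptions for the example, not empirical values.
elderly_qalys = 5 * 10 * 0.6   # 5 people x their last 10 years x lower quality weight
young_qalys = 1 * 50 * 0.9     # 1 person x 50 remaining years x higher quality weight
print(elderly_qalys, young_qalys)  # 30.0 45.0 -- fewer QALYs lost by the elderly group
```

The raw life-years are equal (50 vs. 50); the quality adjustment is what tips the comparison.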
I'm surprised to see the popular answer to Question 3.
> Oh no! A trolley is heading towards 5 people. You can pull the lever to divert it to the other track, but then your life savings will be destroyed. What do you do?
Over 70% chose to pull the lever and destroy their life savings.
People die of preventable causes in developing countries today. By choosing not to donate your life savings today to help them, you are choosing not to pull the Question 3 lever.
According to Givewell, it takes $4500 to save a life in Guinea. So for every $4500 of your savings that you choose not to donate to Guinea, that's one person you are choosing not to pull the lever to save. Have $45,000 in savings? That's 10 people you're choosing not to pull the lever to save.
I doubt that over 70% of respondents are regularly donating anywhere close to their life savings.
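The GiveWell arithmetic above is simple enough to sketch. The ~$4,500-per-life figure is the one quoted in the comment (GiveWell's estimates change over time), and the helper name is my own:

```python
# Back-of-the-envelope: how many statistical lives a given amount of
# savings could fund, using the ~$4,500-per-life figure quoted above.
COST_PER_LIFE_USD = 4500  # approximate; varies by charity and year

def lives_fundable(savings_usd: float) -> int:
    """Whole number of statistical lives `savings_usd` could fund."""
    return int(savings_usd // COST_PER_LIFE_USD)

print(lives_fundable(45_000))  # 10
print(lives_fundable(90_000))  # 20
```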
My guess is that it is a question about responsibility. In the trolley problem it is my responsibility to pull the lever or not. If I don't lose my money people will die. In real life it is the responsibility of everybody. Maybe still very egocentric not to donate but at least your conscience can let you sleep at night.
An interesting take on it: if, for example, you have $90,000 in savings, you could choose not to pull the lever and use your savings to save 20 people in Guinea instead, but I guess this would not be a popular choice either.
There's also a matter of proximity. In many of the other problems, people chose to sacrifice more people that they didn't know well over sacrificing someone (best friend, cousin, yourself) that you do know well. The charity problem isn't quite the same as the trolley problem, because it's saving someone outside of your tribe in a faraway land, vs. saving someone who is presumably local to you and about to die in front of your eyes. Also note that people frequently do give up their life savings (in medical bills, or GoFundMe) to save people close to them.
The charity problem is also unimaginably complex. What if I donate my entire (literal) life savings...then lose my job, go homeless, and consequently die? I've thus saved a handful of lives, lost one (myself), and doomed maybe dozens or hundreds of other lives I could've later saved! Uh, whoops. :|
Every moment, every (in)decision you make comes at the opportunity cost of many other choices, and their exponentially multiplying secondary/tertiary/etc consequences.
These thought experiments feel like (#0) solving an optimization math problem where you're:
1. Stumbling through an unimaginably large solution space and
2. Latching onto local minima (heuristics like "choose the fewest trolley deaths"), which are probably bad answers, only to realize
3. We don't even precisely know the objective function we're optimizing. So the problem has gone "up" one level. We first need to solve that optimization problem. GOTO #0.
The whole thing feels farcically hopeless and is maybe even a recursive or self-referential minefield. Yikes. So the idea of avoiding it entirely, by just winging it through life using your gut or by checking out entirely, doesn't seem like such a bad idea. Hell, it might be the only way to keep your sanity. It's an unsatisfying heuristic, but...oh look, we're back at step #2... :))))))
Not to mention a lot of people account for their wealth in more than just money.
Reminds me of Richard Posner, former Chief Judge of the Seventh Circuit Court of Appeals, and his essay on how poor people have no wealth, so you need the threat of prison to keep them in line; how for middle-class people you only need the threat of impoverishment, so you can just fine them; and how wealthy people only need the threat of loss of reputation to keep them in line.
Which is to say Posner is an idiot who knows nothing of poor people, because all an honest poor person owns is his reputation. Whereas with Musk... when you are that wealthy, someone's always going to overlook your transgressions.
Evidently he also didn't know anything about rich people. If they manage to damage their reputations then they figure a good PR firm and some contributions to the right places will take care of that for them.
I think it used to be true. The Musks and Trumps have, like Neo in the Matrix, decided they have total control over their actions with no regard for mere perception any longer. Some people think the old framework is still valid, and others think it no longer applies.
> I really do want a comprehensive list if possible of which charities are actually worthy of even being allowed to keep existing on this planet.
You're in luck! That's exactly the problem that the Effective Altruism movement aspires to solve (namely, how to do altruism effectively.) And Givewell, in particular, is a great site for that - you can either donate to them and have them spread your donation over their "top charities", or donate to the charities yourself.
They do a lot of research, very transparently, to figure out which charities are actually achieving good outcomes. You can read their research, their thought processes around it, etc.
Note that this doesn't necessarily mean those charities have "zero overhead", as you alluded to in another comment - that's not a good way to judge a charity, in general, the same way it isn't a good way to judge a company; we don't really care how much money it takes to run a company or how that money is used, we care about their profit. Similarly, if a charity uses a lot of money on things like staff, but this makes them more effective at actually (say) saving lives, then it's a net better charity to donate to than other charities.
If five people in my immediate presence experienced such peril then watched me sacrifice my life savings to save their lives, I would expect their testimonies of admiration and relationships backed by a life-debt would end up being an asset of equal or greater value than a tremendous lot of liquid cash. I won't lie; I want to be whole-heartedly adored by them, even more than I want the rich man's $500,000 bribe.
It's a false comparison in my opinion, but I do get your idea.
With your scenario, there are two elements to consider:
1. Bystander Effect, i.e., someone else can help in this situation so I don't have to
2. People likely reason that Guineans still have some agency to try to subsist, so the threat is not as immediate
Compared to the Trolley problem proposed, there is a decision to be made _now_ that only you have control over and there will be an immediate effect in that people's lives will be saved in a situation where they have no option to help themselves or even get by at a bare minimum.
Ignoring the "I'm absolutely broke so who cares?" people (which, while a fair point, I think isn't quite the spirit of the scenario, which is whether you will sacrifice items of significance to you for the lives of others), I think that your scenario has too much distance and too many layers of abstraction between the money and how it helps. Fundamentally, we can't actually understand specifically what the charities do to save a life in Guinea, so it's harder for people to accept that the decision has any significant impact.
So I don't see any real inconsistency between your scenario and the trolley one. People aren't as accepting of the premise that a charitable donation immediately saves a life, and they count on the help of others, whereas pulling the lever has an immediate, observable effect.
I think a lot of people would pay money in that situation. Sending money to someone in Guinea whose suffering you never even saw, and never hearing whether it helped, is indeed a different thing than this trolley problem.
What did you answer for problem 6, where a rich man offers you money to pull the lever and kill someone else? Presumably, a rich man is likely to have more economic productivity than a random person.
Like most people, you probably drive a car from time to time. Collectively, cars kill 30-40k Americans every year. By choosing to drive you are a fractional murderer: statistically responsible for some slice of a death. Not many people will decide not to drive on learning this…
How do you know if you’re a bad driver or a good driver? It seems like a pretty blatant tautology to say that only bad drivers kill. Willing to bet almost every one of those fatal car crashes was caused by someone who would have said they were a good driver…
I think many people would agree in the abstract that donating some amount of their savings to developing countries is the right thing to do -- many more people than actually do this in practice.
Saying something ought to be done is not the same as doing it, basically.
And of course, while donating to poor countries is fundamentally similar to Question 3, in practice it's obviously not the same; people do seem to have a 'moral discount' for things that occur farther away (geographically, temporally, or otherwise).
Silly random thoughts that went through my head at this question
1) Many people's life savings is 0. A bargain!
2) If _I_ were one of the five people, I'd try and make it up to the lever puller. Lever-guy might even come out ahead.
3) If I pull this lever right away in question 3, then if there are later questions about giving up my life savings, they'll be free. This might be the best-value lever pull! I might be kicking myself later for missing this deal.
Other people have worked a long, long time for their life savings, and as appreciative as the rescued might be, it's just as likely that the puller gets a sincere thank-you but no financial compensation. In the right situation, I could see it leading to severe depression as the puller feels they gave up everything and nobody cares.
I took the Giving What We Can Pledge[1] and donate 10% of my income to effective charities for this reason. It barely makes a difference to me and it saves more than 1 life a year. They also have a really cool How Rich Am I calculator[2] to put it in context.
I still chose to pull the lever though because I don't have $18000 in my life savings yet.
By that logic, somebody defrauding a charity out of $4,500 should be punished roughly the same as a homicide.
(Not saying that's necessarily wrong, just giving some perspective on the "fun" corners you get into when putting a monetary value on human life, especially when it's so different from the value where you live.)
Only if they defrauded one of the most effective charities (as opposed to $4500 raised to buy some new musical instruments for a school) and only if punishments should be determined by total harm caused.
Harm caused is usually one of several factors we use to determine punishments, which is why attempted murder is punishable even if the perpetrator failed to cause any harm.
(Not saying our current system for punishment is necessarily right.)
It depends on how much people's life savings are. They can even be negative; not everyone is in your situation.
There are pragmatic reasons as well. For me, it's this: people spend a lot to be happy, like buying luxury stuff or going to the movies or a good restaurant. Saving people makes me happy. I am willing to pay for my own happiness.
I answered with the majority because I followed John Rawls' condition of the "veil of ignorance", which means that the only information I have to start with is:
1) an average person's life savings is less than 1 million
Contrary to popular belief, there's nothing unethical about attaching a price to a human life: it's a very uncomfortable question, and I think there ought to be some disagreement on what the number should be...
But money is fundamentally just a unit of value. We use it to compare the value of different things. We can use it to convert between labour (someone's time, which is a fraction of one's life) and goods and services. We can also use it to measure lives: 5 statistical lives lost is worse than 1 statistical life lost, all else being equal. We can also use it to weigh years lost to disability, etc.
Note that this doesn't mean things always have a fair price, and it says nothing about whether money is fairly allocated in society. And it certainly doesn't mean that people can be replaced with money. None of this is the point.
As a thought experiment, ask yourself the following questions:
- Is a human life worth more than nothing, or zero dollars? (I hope you will say yes)
- Is a human life worth more than $20? (I hope you will say yes)
- Does a human life have infinite value? I hope you will agree that this is not the case: if it was, then we would be justified in sacrificing all of the world's resources combined, in order to save a single life. The paradox is that this would come at the cost of all other lives, which is clearly nonsensical. Even a pharaoh's life, or a king's life, is not normally considered to be worth the combined lives of all of his subjects, from the perspective of modern society. Lives have some non-zero, but finite, economic/social "value".
- Is there a fair price, in dollars, for an hour of labour or work? If there is, then you accept that there is some conversion factor between a fraction of a human life, and some amount of dollars.
Putting a price tag on a human life is something that is so important, that we ought to discuss it more often. Whether we put the price too high, or too low, is often what explains political disagreements on the provision of public goods and services. We do ourselves a great disservice by not forcing ourselves to come up with a concrete number, or by not being honest about what our numbers are. We should challenge each other's assumptions and ensure we put a fair price on it, and ensure that this price is based on the things that matter to us as a society.
Judges, actuarial scientists, and economists, all do it explicitly. Nobody agrees on a specific number, but they all pick a number. The rest of us? We do it implicitly, whether or not we're aware of it, every day of our lives. The problem is that our value judgments and our choices, especially if they are not conscious, are often not self-consistent.
Thanks for the thoughtful comment. A lot of this thinking does line up for me, but it seems like there's something missing. Why do we all feel a little bit sick when we hear about car executives doing calculations to determine whether to perform a recall or not? Isn't it because we're putting a financial price on life? Or is it just that the price we're hearing is too low?
The most infamous recall was the Ford Pinto, and it was particularly bad because they knew about the problem before the car ever shipped, and once the problem was known didn't fix it for years even though the cost would have been $11/car.
Less directly, there's the fact that new safety features often come to luxury vehicles first, and might eventually trickle down to cheaper cars due to competition and regulation (backup cameras, automatic emergency braking).
I think mostly we don't like to think about it. This comes up particularly in the health care industry. There was the (manufactured) furor over "death panels" but there's also the very real challenge of deciding whether it's worth paying possibly millions of dollars on a drug that might have a marginal-at-best improvement in someone's survival odds. And it's not just about some corporate executive's pockets; these costs get passed on to regular people through increased premiums and may cause people to drop their health insurance entirely.
> And it's not just about some corporate executive's pockets; these costs get passed on to regular people through increased premiums and may cause people to drop their health insurance entirely.
Not only that, but even in first-world countries (i.e., countries that have a good universal/public healthcare system), the dollar value of a statistical human life matters as well. There are waiting lists where you have to decide a priority rank for who gets an organ first, or who gets scheduled for surgery on Monday versus who has to wait, etc. Nobody likes that it has to be this way, and nobody wants to be the person who decides (in reality, it's often an ethics committee and there are rules and guidelines to follow), but that's how it has to be.
The money used to save and extend lives has to be taken from somewhere, and it's taken from the money that represents a fraction of the population's labour one way or another (it's a bit more complicated than that, especially when you consider monetary policy, and non-labour sources of income, but the general idea stands).
If you don't put a dollar value on it, then you're going to be making a lot of subjective decisions based on how you feel resources should be allocated: you're going to be taking from some people and giving to others, based on a gut feeling rather than a system. Your gut feeling might be the right one from time to time, but chances are it won't be self-consistent, and will be very biased.
I think even if we all agree that there is a price, it’s hard not to feel discomfort about that.
Depending where in the world you are, you may have grown up to believe that humans are more worthy than animals because we have “souls” or whatever that means. A lot of our culture and religion deeply reinforces the “humans are special holy magic” feelings. I won’t develop this thought to great detail unless someone wants me to.
But in addition to that, you’re absolutely correct in saying that oftentimes the price we put on life is too low.
In addition to that, even if the price is correct, it may be difficult for us to have a good perspective on that, simply because we are talking about large numbers (we’re really bad at large numbers), and because pricing stuff is hard. Even stocks are hard to price, and people argue with each other about wildly different price targets.
Well, once the event hits the news and it comes out that you sacrificed your savings to save some people, it should be easy to set up a GoFundMe and recoup the costs, at least if you're like me and your life savings are paltry. You may even profit!
I picked that option, based on the knowledge that it would take me around 6 years to regain my life savings at the moment, while it would take me over 6 years to get over the decision if I'd picked the savings.
If I very publicly save 5 people from a violent death, I can easily parlay that into much more than my life savings are worth with a couple of book deals and maybe a movie.
If you factor in the legal implications, the trolley problem becomes trivial. Do nothing. I'm not qualified nor allowed to operate train infrastructure and the legal consequences will become worse if someone dies because of something I did.
The crux of the dilemma is that there are two solutions. If there aren't two then it's a lemma and the trolley problem is solved.
I didn't introduce the convoluted idea of there being train tracks and a switch. If it was purely hypothetical, you could've asked: "Would you prefer to let 5 people die or let 1 die"
Here's another hypothetical:
"A train is on a track and you are standing on a bridge above it. A couple of kilometers further down the track, you spot 5 people who are going to be run over by the train. Do you push someone standing beside you over the edge and onto the tracks, so the train will register a collision and stop, in order to save those 5 people?"
This is the exact same scenario. Only instead of pulling a lever, you have a human interaction. The percentages of who would kill that 1 person change when the trolley dilemma is asked in that way.
If we're getting to the crux of the problem, then why does the response change when you provide the exact same scenario but replace the mechanical with a human-to-human interaction?
I don't see how that trivializes the problem. With that additional consideration, the options are:
1. Let 5 people die and 1 live, and you probably don't have any legal consequences.
2. Save 5 people, cause the death of 1, and you probably go to jail.
So the thing that made the answer trivial was the introduction of legal consequences that affect you personally? Doesn't the difference of 4 deaths dwarf that?
No, it doesn't. The way I look at it, those (legal) rules weren't made randomly. I don't think it's an option for me to overrule what millions of people have decided on over the course of a couple of centuries, especially if I only have a couple of seconds to think about all the implications of my action.
The calculation for me would change if there was a choice between killing 5 people and potentially killing nobody. But that's not the hypothetical here.
If those rules need to be changed, then change them through debate and well-reasoned arguments, not a split-second decision. The "good Samaritan" law is an example of this: if you perform CPR on someone whose heart has stopped, they can't sue you if you save their life but cracked some ribs.
> I don't think it's an option for me to overrule what millions of people have decided on over the course of a couple of centuries. Especially if I only have a couple of seconds to think about all the implications of my action.
I find it bizarre that you're taking the side of the, say, thousands of transit policymakers--who are certainly not taking this hypothetical into account--over the majority vote of the public on this exact ethical issue. Not wanting to reason from scratch in the moment is fine, but you don't have to. This is a well-known dilemma, and the consensus is that you should kill one to save five.
> If those rules needs to be changed, then change them through debate and well reasoned arguments and not a split second decision.
Yep, that's why we're here. Now that most of us have agreed that "pull the lever" is the right call on the trolley problem, do you think the transit authorities are going to codify it as a law? Don't be ridiculous. Just take the legal hit, if it even comes. Laws are wrong sometimes, especially in hypotheticals.
I would never criticise someone for making the decision to pull the lever. I understand that decision, I really do. I only said I wouldn't do it and explained why.
It's not as clear cut as you think it is. If you think it is, then write to a philosophy professor and say you've solved the trolley dilemma through an internet vote. I hope this sounds absurd to you.
You can play with numbers, but if I were to pull the lever, I would feel I would kill someone for the sole reason they were in the minority. Which is an extra reason for me not to do it.
> I would never criticise someone for making the decision to pull the lever. I understand that decision, I really do. I only said I wouldn't do it and explained why.
And I wouldn't criticise someone for not pulling the lever... unless the primary reason they didn't pull the lever was the legal consequences they would suffer. That's a terrible reason.
You have no problem when someone decides to kill someone and thinks the laws don't apply to them. To make matters worse, you think someone who tries to obey the laws makes a terrible decision by not killing anyone.
If killing someone doesn't give you pause, I would be afraid to be a minority in whatever society you live in. The laws, which you so easily dismissed, give some rights to minorities.
What's next? Here are five people who are dying... Let's kill someone in the minority to get their organs and transplant them. It's ok! Five people will survive, while only one dies.
Laws, whether or not you like them, are there to also protect minorities.
> You have no problem when someone decides to kill someone and thinks the laws doesn't apply to them.
This is the most dramatic mischaracterization I've read in recent memory. Who has "no problem" with either outcome of the trolley problem? "Oh, one person died? No problem!" Did you consider how strawmannish that sounded before you wrote it? And as for the law "not applying", that's not part of the conversation either. The law will certainly apply, as it must to preserve societal order. But the pragmatic application of the written law is not always aligned with what is right, and when multiple lives are at stake, the legal consequences themselves are not especially, well, consequential.
Take this euthanasia hypothetical as another example of the point I'm making:
Your spouse is experiencing horrific, constant, neurological pain. It is a given that they will die in the next 48 hours. The only treatment available to you is ineffective at treating the pain, but at high doses it will cause immediate death. Your partner is aware of the situation and has requested euthanasia. Do you euthanize them?
It's a highly contrived situation, but then so is the trolley problem. I believe that people should have an opinion on this problem that is not swayed much by the possibility of legal consequences to themselves. The consequences either way are so incredibly dire that the opinion of some disinterested court referencing a law that was absolutely not written with this situation in mind should be practically ignored. If somebody said to me that they would have euthanized their spouse if the law in their state hadn't said they couldn't, but since it did say that they just watched them suffer, then I'd say that person is a coward. I'm not saying they should have done it and then tried to avoid the legal consequences, I'm saying they should have done it and then accepted the legal consequences, because the consequences to the other party are so much more drastic.
That's all I'm saying about the trolley problem. If you believe in switch-pulling the trolley problem, you should switch pull without considering your local laws, or how much you could spend on a lawyer. If you don't believe in switch-pulling that's fine as well, unless you are somebody who actually does think switch-pulling is the right thing to do but not if there are legal consequences to yourself. Again, that's a terrible reason. Believe one way or the other and act on that. Don't outsource your morality to the legislators when you're faced with a once-in-a-lifetime moral dilemma where people are guaranteed to die at the end.
> 2. Save 5 people, cause the death of 1, and you probably go to jail.
You'll be prosecuted for sure, but if the death of the 5 was certain unless you acted, I'd be pretty surprised if you were found guilty, let alone being jailed.
You're implying that you'd go to jail for the lives of 4 people (or that people should), but recent experience has shown people won't even wear a mask to save other people's lives.
Yes, people are shitty utilitarians. I can think of a thousand equally egregious examples that we're both guilty of. Doesn't mean we wouldn't be good people when presented with a social situation that our culture and instincts actually prepared us for.
Think about how many people have sacrificed themselves for others. Not just jail time, but death. Would each of those people have lived completely pure, selfless lives if they hadn't done what they did? Probably not. Who knows, some might have ended up being anti-maskers. It is said that "dying is easy". Making one clearly right choice, damn the extreme consequences, is actually very normal for humans. Just as normal as spending a whole life making bad decisions. Even more bizarre is the two "modes" aren't even mutually exclusive.
Is there any proof that wearing a mask saves ANY lives? People talk about this as if it were some fact. It's usually the same people who believe in "safe and effective" vaccines, who quickly forgot that vaccines were supposed to stop transmission and stop symptoms; they just keep moving the goalposts, while the reality, if you look at the stats in highly vaxxed countries, is that the vaccines do really nothing anymore.
You spread a lot less virus if you wear a mask. Cheap masks don't protect you, but they do protect others from you. Countries where everybody immediately wore masks had far less cases than those where a lot of people didn't. The whole mask issue was the ultimate test in altruism: do you accept slight inconvenience that might save other people's lives? A lot of people failed.
Masks work only in the lab, when worn PROPERLY on ALL occasions and you never meet a stranger without a mask. Good luck with that. In real life people have to eat without a mask, and they won't wear one at home or often at work etc., or they don't wear them properly anyway, with gaps around the edges.
So yeah, masks work in theory, but reality has shown us they are completely useless, especially with the newer, more infectious variants. If they were such a great invention we wouldn't have flu and other respiratory viruses; in the end everyone will get infected anyway.
They are only useless when half the people don't use them and ignore all other measures while they're at it. Obviously there's no need to wear it at home; that's just stupid. But outside, in public space, it catches a lot of those particles that can carry the virus. If everybody wears one, that means fewer virus particles in the air, and less chance of infection.
And during the pandemic, a lot of countries did indeed not have their usual flu season.
Numbers of vaxxed vs unvaxxed hospitalized (per capita within each group) don't confirm this disinformation spread by pharma companies and their paid politicians.
Page 5, new hospitalizations, 7 day numbers per 100K people in each group
June 2022
unvaxxed 1.4/100K
vaxxed unfinished 1.3/100K
vaxxed 2 doses 0.5/100K
vaxxed with booster 1.7/100K
To be fair, most of these people are hospitalized for reasons other than COVID regardless of their vaccination, as was always the case even when they were spreading propaganda about the hospitalized unvaxxed. Numbers in the ICU look better for the vaxxed, but there are way too many factors to consider: as part of a vaxxed risk group you are more likely to have a better lifestyle than an unvaxxed risk group, which doesn't really show that vaccination works.
"Nothing" would be my choice regardless, morally and philosophically. In each case you'd likely be sued by the person(s) you'd kill and they'd probably win.
If you don't intervene circumstances play out and you are blameless. In the other case you are choosing to kill someone, taking their life based on an idea in your head which may or may not be valid. In reality nothing is so clear cut and the people at risk may not have died anyway and you may kill someone needlessly.
That is not to say I wouldn't, for instance, defend someone being attacked. In that case I'm not causing someone else's death by my actions. If the attacker dies during my defense that's fine because it's due to his actions, not mine.
It's not trivial. Dunno about the US, but in many (developed) countries around the world, not providing help to someone (e.g. passing a car crash without stopping, when you don't see anyone else providing help) has legal implications.
When you are certain your actions will result in the death of someone, I doubt the legal protections for helping someone are as clear-cut as you think.
Maybe I'm wrong, I don't know all laws of all countries.
I always found the trolley problem interesting. It usually boils down to whether you believe in karma or a creator or not.
Most religions have the idea that it's different to take a life than to stand by and do nothing. For instance, you should always try to help others (save a life), pretty much above all else. To take a life, however, requires taking action. I.e. standing by as someone drowns is not the same as holding someone under water. For murder you're damned to hell. For standing by you'll need to repent, but it's a lesser sin.
The trolley problem IMO is a framing problem: (1) it assumes you know the future, and (2) it assumes your will is above others'.
The example I typically gave people when discussing this problem is actually in this fun exercise. Imagine the 5 people strapped themselves to the tracks. Imagine they knew they would otherwise murder, and wanted to die rather than murder. When you redirect the train, you actually cause more deaths, because you didn't know the intent of those people.
The exercise helps you decide what you value. For me I never apply my outside influence to the system, except when I can save a life without costing a life. I believe life is more valuable than pretty much anything I saw in the game.
I’m a realist and objectivist. So for me the question is “what could I live with?” And “what information do I have to make a decision?”
In reality, everyone in this situation made their bed (so to speak). So I am unwilling to ever impose my will on the system, barring saving a life (without costing one).
One of the problems did have people who tied themselves to the track, while the other person stumbled. Most people chose to let the group that chose to be on the track die.
Another interesting problem would be 5 people tied themselves to the track in front of the trolley, and 1 person tied themselves to the other track. Everybody tied themselves to the track, so should you let the one die instead of the 5? Or did the one tie themselves to the track believing that track was safe, while the other 5 tied themselves to the track expecting to die?
You're assuming free will though. It could be that nobody made their bed, and you're just doing what was preordained, but chalking it up to free will to feel better. Maybe you could live with anything. Maybe the information is a facade, and never really makes a difference, because you'll always just do what you were going to do anyway.
If there is no free will, though, then there isn't an interesting moral quandary about you pulling the lever. You either do, because you were preordained to do so or you don't, because you were preordained not to, and in neither case does any morality or decision-making enter into the picture, since, absent free will, you are not a morality-possessing or decision-making entity.
There's no free will in a clock but it's still interesting to watch the gears. And just because you don't have free will doesn't mean you don't have morals; even if you don't choose to do right or wrong, you can still distinguish them as separate concepts. And even if you can't distinguish morality, that doesn't necessarily make you amoral either. It's like being born to be an extra in a play and die once the play is over.
In a world with strong determinism, there is no consequence. Actions don't cause reactions. The initial state of the system causes all outcomes -- both immediate and apparently-consequent.
The person doesn't really pull the trolley lever insomuch as the universe began in a state which determined that at that moment, the person's arm and the lever would move.
In such a world, even time is barely meaningful, since events, not being dependent upon each other, don't really have a causal ordering.
If you put your hand over a flame and it hurts, you pull it back. It doesn't matter if it was "determined" that you would put your hand over it; there is still an obvious and immediate reaction to the action. The time it takes for these things to happen is also meaningful, as it affects how much pain you feel.
So even in a completely determined universe, there are consequences, and time is meaningful. The only difference is whether you realize that there's a universal puppeteer or not.
I disagree with how you’re assessing the magnitude of the sins here. I don’t think the problem really changes at all when you introduce religion to it. There’s theistic arguments in each direction of this problem, and much like with the non-theistic arguments, you won’t find any of them to be conclusively correct.
Interesting then that a religious society like the U.S. (>80% consider themselves religious) does not actually support helping others. This is particularly striking for law enforcement, who have no obligation to help, and then are free to choose not to. Last month, cops watched a man drown, and that's apparently perfectly fine https://www.theguardian.com/us-news/2022/jun/06/arizona-man-...
Humanist secular societies (e.g. France, Germany, ~30% religious people) instead have a culture and legislature that makes helping others an important duty; law enforcement and also civil citizens have a moral and legal duty to help, and it would be morally and legally unacceptable to watch someone drown.
This might be something for my own trip, but the "trolley continuing for eternity" question was interesting to me.
I experienced it as if the universe (the creator) was asking itself (me) whether it should carry on doing itself (literally, and figuratively) or stop and bring the trolley ride of life to a complete end.
Enough philosophy for today.
returns to pretending that being a jelly-covered skeleton that has a hole in its face to put food which eventually comes out of another hole 3-inches from their magical life-creating sex organs (that they rub together to make new ones) and waking up every day to a world where they sit in front of a box that requires they press buttons in the right sequence to make their own life continue, is completely bloody normal
Ending with a contextless kill count was the perfect touch. This feels almost as profound as it is silly, but I'm not sure how meta-level the profundity is. Lovely site for sure
Anybody's response is meaningless unless we take into account the context they have in their heads.
A very common piece of context is: "If you touch something and people die, it's your fault. If you don't touch anything and people die, you can very convincingly claim in front of a judge 'It wasn't me, I didn't touch anything!'"
That's a very different setting than "I'm a philosopher who wants to show how much I care or don't about total utility to other philosophers"
This is exactly how I've come to think about them as well. If you get involved with something, someone might blame you.
I think that's why (bad) managers often just don't seem to make any decisions. You need to know whether the feature should be like X or Y; why won't they just make a decision and tell you? It's because they instinctively understand that if they can get away with not making a decision (e.g. you stop waiting and just go for one of the options, or you contact someone else to make a decision), there's less chance they'll be blamed for stuff.
I think the emotional context / trauma is interesting as well when considering many of the scenarios. Regardless of any external validation, how will you feel after the incident? The "If you touch something and people die, it's your fault" description is very clean and logical, or something like "I don't want to play god, when there's so much uncertainty", and lots of people leave it at that. But what would your conscience hoist upon you in the days and years after you made your decision? What could you _actually_ live with, not what you think you could live with.
I think trolley problems are mostly linguistic. Take this wording: "Oh no! A trolley is careening into a crowd! Should I try to turn it so it goes through the sparsest part of the crowd?" I think most people — by a wide margin — would say yes, because it's worded as still hitting the same thing, just hitting it in a way that does less damage, and we feel differently about that than hitting a different thing, "this guy" instead of "those guys".
That's not really just a difference of language, it's a different situation from the base trolley problem. You're describing a realistic situation where the potential outcomes are fuzzy and you're just trying to minimise the probability of damage. The base problem is binary - definitely let 5 people die or definitely deliberately kill 1 person.
If you believe the probabilistic and discrete versions of the problem are the same that just means you fall into the utilitarian camp when it comes to this thought experiment, and believe that the outcome is the only thing that matters.
From an ethical point of view, the biggest problem for me was when there were two bad choices to be weighed against each other, e.g. 1 person on a track dies versus 5 persons on the other track. In most of these cases, I "did nothing" - because I don't want to play god and I knew where this leads: there are ethical edge cases that I definitely do not want to think about. I am not super convinced about this in hindsight. E.g. triage exists in hospitals to help doctors find the least bad outcome among many bad options. Mostly, this involves metrics such as the number of dead patients, or the number of life-years lost (e.g. prefer to save children before the elderly).
> I "did nothing" - because I don't want to play god
I can't understand this perspective. Deciding not to pull the lever is a decision that you've made. You are "playing god" all the same if you decide to sit there and not pull the lever. There is no ethically important distinction that comes about by the mere fact that physical movement is necessitated by one of the decisions versus the other decision. If the framing of the question changes from lever-pulling to "you must press either button A or B", and that re-framing causes you to make a different decision, then I question the ethical assumptions going into the decision making. The ethical core of the trolley problem is that you're asked to choose whether you prefer one outcome versus another -- the precise physical movements you must carry out (jumping jacks, lever-pulling, blinking, button-pressing, or just sitting still) to lock-in a particular decision are irrelevant.
I agree, but how do you decide, e.g., if there's one disabled person on one track, and one non-disabled person on the other? The thing is that I don't want to make these decisions (and luckily, I am not forced to: I didn't design this game, I did not put the people on the track in the first place, I can decide not to play the game, and in reality I can always ask "Is there perhaps an option C that saves both?").
The core ethical point of the trolley problem is that you have to take an action.
In real world scenarios there are most often significant distinctions between taking an action and refusing to take one.
The physical action is a red herring under a moral system that takes consequentialism seriously. The trolley problem shines a light on this red herring.
Firstly, whichever decision you make, you have to take a real physical action. Suppose you decide not to pull the lever. You still have autonomic arousal, neurons are firing, your prefrontal cortex is inhibiting you from pulling the lever just for laughs, and so on. Lots of atoms had to move around to come to that decision, which you would label as "inaction". I disagree, it is action. Only a minority of caloric expenditure is associated with the actual lever being pulled. Not that this really matters, anyway, given that the outcome is what's ethically important.
Secondly, you should consider the consequences of thinking that pulling the lever somehow matters ethically. That means that in certain edge cases, you're surrendering the choice of who lives and who dies to the arbitrary whims of the experimenter. If I was a malicious experimenter, I could maximize the number of dead people by just jury rigging the allocations. If your ethical framework leads to a situation where a maximum number of people die, I would argue that there's some faulty ethical reasoning there.
My mental framing of situations like this is "One shouldn't play god. If expecting to be faced with choices about life and death, one should prepare. Get very, very good at the god job."
We humans have outsized impact on the world around us. Often, we don't get the luxury of no-choice. Not if we want to survive ourselves.
(One variant I've never seen is "You realized that creating a system of moving multi-ton vehicles at street level over long distances at moderate speeds could result in fatal collisions. Do you refrain from inventing trolleys?").
> I am not super convinced about this in hindsight
Yeah, trolley problems are a philosophical tool to help work out the ethical edge cases for situations like the example you gave.
Though they can be a distraction by reducing to the wrong bad choices. When trolley problems became mainstream because of the looming spectre of self-driving cars, the example of "should the car drive off the bridge and kill the driver rather than hitting a child on the bridge" was making the rounds. I was always on the side of "I don't care if, in this absurdly rare case, the car decides to run over the kid AND then drive off the bridge, because human car drivers kill more than 30,000 people a year in North America alone".
And, as an aside, ahhhhh, remember those days when our biggest problem was that self-driving cars were about to own the streets and put thousands of truck drivers out of work.
If you were driving a car that was plowing into a crowd of people, would you try to steer it towards the less crowded area that still means hitting people? Or just merrily let the car plow through the thickest part of the crowd? I find it hard to believe you wouldn't make the active choice to steer it away.
If you were a son, would you save your mother or your gf?
The "correct answer" is your mother.
"China's ministry of justice later posted the "correct" answer: exam writers are duty-bound to save their mothers. It would be a "crime of non-action" to choose romantic love over filial duty. "
But I think this trolley test might be extended to pulling a lever to kill a man instead of a woman. Which I think reveals some very sad notions of equality...
They're not equal. E.g., if you value the survival of our species, the average woman is worth more than the average man. But if you worry about overpopulation, it's the other way around. These problems are simply too artificial to make any meaningful distinction.
As a compatibilist, I don't know what to do with level 28 ("Oh no! A trolley problem is playing out before you. Do you actually have a choice in this situation? Or has everything been predetermined since the universe began"). Both statements can be true! Although I guess that means there is no choice :(
Determinism doesn't mean that we don't _choose_. Free will is an incoherent idea, but choice still exists in a deterministic universe. Choice is just the outcome of that determinism.
I found that problem kind of silly. I chose the free will option because it seemed to imply that determinism would just let the people get run over. I don't know if it would have been the same in end, but if the free will option was the only one that saved lives it strikes me as an unfair way to discount determinism as some sort of passive "I'll just let people die" mindset, when arguably the neuron interaction in my brain would make me choose the same result in identical circumstances.
> Oh no! A trolley is heading towards 5 people. The lever just speeds up the trolley, which might make it less painful. What do you do?
I voted no, cause really? Who the hell would SPEED UP a train to hit someone because it 'might' make it less painful? Disregarding the million other mights that could be assumed. The logic here is astounding, and yet.
33% of people agree with you, 67% disagree (20,485 votes)
I guess the lesson here is people act on assumptions as tho they are fact.
I was surprised about that one as well.
Here is another assumption: Speeding up the train might reduce their chance of being saved by a third party. So by speeding up the trolley you might make it less painful, but also might rob them of the chance of being saved.
I know this is just a thought experiment, but there are a million variables to a scenario like this and just adjusting for the first one you see is a slippery slope.
A lot of the problems are structured as parallels to real world problems. While strictly speaking I voted "do nothing" for that problem, I wholeheartedly support the real world parallel which is euthanasia (speeding up the trolley to reduce the pain of inevitable death).
It's going to happen anyway, so there's a net positive effect of speeding it up. I can't imagine people not choosing to do so.
> I guess the lesson here is people act on assumptions as tho they are fact.
As I wrote above: these problems are too artificial to draw real conclusions. They're thought experiments. The only information you can use is given. For the rest, you have to fall back to "defaults" (each life is worth the same, etc.).
If you look at it like that, it's interesting to see how many people would opt to run over the rich man.
I voted to speed it up. "I don't know everything so I remain cautious" is a mentality I applied for some problems, and not for others. In this case I thought the problem would be trivial/boring if considered from that mentality, so I instead made a choice between condemning someone to a 10 second life of 10/10 pain and condemning someone to a 5 second life of 5/10 pain.
Also voted no, felt like pulling the lever would mean I took part in the inevitable killing, even if the outcome was the same. It would also remove precious time needed for the victims to recount their lives and come to terms with their situation before dying.
I feel like this is easier if you’re a non-lever-puller. If you think it’s wrong to pull the lever if it causes anyone’s death who wasn’t dying anyway, then most of these are easy.
Except the “you can’t see the track” one I suppose :P
I think it's never right to pull the lever, unless there aren't any humans on the other track. I'd sacrifice an animal or objects, but people should never be sacrificed for other people, unless they actively choose to.
“The court also found that the act [shooting down hijacked plane] is incompatible with the constitutional right to life and the human dignity. The act would turn passengers and crew of a hijacked plane, victims themselves, into "objects" - not only to the terrorists, but also to the state, which does not have the authority to kill innocents. If their deaths would be used to save others they would be reduced to mere "things" at the pleasure of the state. Further, the court believes that the arguments of the federal government, saying that passengers in such a situation would die anyway, are invalid, as human lives deserve protection regardless of the expected duration of their existence and that it is impossible to fully assess the situation leading to an eventual invocation of the act.”
Extreme situation that could be a counterexample to your claim:
Dr. Evil is 90 years old, his consciousness is slowly fading away, and he is going to die in a week. He is almost ready to release a bomb that will torture and kill half of the world's population. If you pull the lever, Dr. Evil will die today and you will save all those people. You are the only one who can pull the lever. Do you pull the lever, or do you stick to your principle of not sacrificing other people who don't want to be sacrificed?
I wonder what would happen for the bribe problem if the rich man wasn't on the track at all. I feel the unfairness of comparing the relative value of rich life vs poor life compels people to reject the bribe. But if it was simple bribe to kill with the other side being empty, then it becomes a measure of the worth of a life.
I justify it like this: a world where well-being is decided by wealth is unjust and undesirable, so trying to enjoy your privilege is a moral wrongdoing because it very directly prevents the spreading-out of wellbeing. This makes it equivalent to the littering problem (in mode, not scale), where I chose to kill the litterer.
I'm not confident, but that's how I rationalize it. I'd love to hear any thoughts on this.
The thing is that when you pull the lever, you're interfering, so you're the cause of the consequence after pulling the lever. If you don't pull the lever, you're not responsible for what happens, at least in my opinion. That's why, in the basic case of a trolley heading towards 5 people or 1 person, I wouldn't pull the lever no matter what, because at least the deaths of the 5 people aren't my fault, but the death of the 1 person would be my fault.
I was really surprised that a majority would pull the lever to kill the litterer versus inaction killing the good citizen. I don't consider myself the moral authority to sentence someone to death for such a minor crime. There was a substantive barrier before I would take action, but I would take action when it was significantly weighted.
Apparently we’re in the minority together. Cynically (and practically) there’s also the legal side to think about. If you cause a person to be killed by a trolley, you will at minimum be sued into oblivion. If you do nothing no legal case can realistically be levied against you.
Edit: a fun part of the trolley problem not really explored here is how inconsistent most people’s logic gets when the outcomes are effectively the same but the actions are slightly different. For example, what if instead of pushing a button you had to shoot the person on the tracks? What if you could kill someone today who is going to shoot up a mall tomorrow? More fun, what if you could push a button to kill a millionaire and redistribute their wealth to save 100 families in subsaharan Africa?
Logically, as soon as you go down the path of actively choosing to kill a person to achieve some “greater good”, you’ve really thrown an awful lot of morals out the window.
Isn't there a standard alternative: you can push a very fat man onto the track and derail the trolley. All of a sudden a lot fewer people are willing to sacrifice the fat man.
Despite the fact that a man fat enough to derail a trolley, if even possible, likely has an extremely low quality of life and an extremely short life expectancy.
>If you don't pull the lever, you're not responsible for what happens, at least in my opinion.
I disagree. I think there's some point at which minimal effort from you for appreciable reward for others puts an obligation on you. And that's precisely the reason the trolley problem is so popular.
Don't think I got that far, but the thought process is not hard to follow. One person is gonna die either way, so it might as well be the "better" option.
Except it's not up to you to decide which of these two will die; both lives are equal, and you must perform an action to kill the littering guy. If you do nothing someone will die anyway, it just won't be the littering guy.
Heck yeah it's my decision! I don't want to live in a world where decisions are made like this, but in this case we must accept our role as the lever puller. Or walk away I guess, but that's no different from actively condemning the litterless person.
Neal makes some great and funny pages with great visualizations, but being a data freak I'd like to see more statistics. It would be great if we had a summary page where all questions and detailed results would be presented, even better with a distribution showing time taken, to see how "easy" each decision was :-)
> Lizardman’s Constant is an idea proposed by Scott Alexander that each poll always has about 4% weird answers. In one poll, 4% of Americans said that reptilian people do control our world and, in another 4% answered ‘Yes’ to the question ‘Have you ever been decapitated?’
That’s strangely a whole 3 percentage points higher than the number of Americans who self identify as ‘evil’ (on the D&D alignment chart) - see 4:47 at https://fivethirtyeight.com/videos/where-americans-fall-on-t..., although at this point we’re probably well within polling margin of error.
You fool, Amazon will refund you the cost because it's late and you'll get your package in a few days regardless. You threw away a lifetime of free Amazon stuff!
Why is that so hard to believe? There are innumerable stories of people sacrificing themselves for other people. Everything from throwing oneself onto a grenade to having kids.
Unless you chose every death-and-suffering-maximizing option for these absurd trolley problems, in which case you will be remembered for your reign of trolley-derived terror.
I'm a little confused by your rant when they clearly stated having kids as the sacrifice here - which makes a lot more intuitive sense than whatever antinatalist position you're projecting here onto them.
Also, while I'm firmly pro reproduction, this all sounds more than a little judgemental and presumptuous about the motivations of those that chose to not procreate.
I selected that one. Not because I'm particularly selfless, but because I think it's morally wrong to throw the switch whenever doing so amounts to killing someone, even when inaction results in more death.
I don't make a conscious choice to take action that will cause a stranger's death (even if more people die because I didn't take action) with exceptions for my monkeysphere. In the case of the worst enemy one, I didn't hesitate to run them over. In the case of self preservation, I chose to save myself. In the case of my best friend, other people are gonna die. Etc.
It makes a lot of these pretty easy and a few (like first cousin vs 3rd cousins) a matter of happenstance, just like the happenstance that I was looking at these absurd trolley problems.
I'm curious how far you're willing to take that. E.g. if instead of a couple of people on a track it was "press the button to confirm destroying the bomber and pilot, or let the nuke drop on NYC due to lack of confirmation", does that count?
That completely changes the question because it gives you obvious deniability, whereas the lever probably does not. Especially if witnessed.
That matters legally (accidental death) versus intent (mens rea, murder).
It also matters socially, because you can deny that you made any choice even though you may have chosen (assuming you only do it once). You can even deny it to yourself.
It does beg the question: who set the trolley up to kill people, and why did you happen to be standing on the trigger?
I think inaction refers to intent, not the way the wind blows. In the case of the hair trigger, I’d be morally obligated to stand very still, but it wouldn’t be particularly blameworthy if I happened to falter. This is in contrast to the normal switch, where I’d be to blame if I pulled it.
I tend to feel that is the general answer, but that at some (arbitrary) level of absurd disparity of outcome, it becomes morally imperative to act.
Like that 5-to-1 standard trolley problem is a hard question for me to answer, and on different days, I pick different answers. Probably "do nothing" most days. But if you made it 50,000,000-to-1, it is not a hard problem any more. Somewhere in between those two is "the line" where the fact that you are becoming a murderer is outweighed by the value of the life you saved.
Yes, if I could be certain that I could save 50,000,000 lives by murdering someone, I think it would be a moral imperative to do so. That wouldn't change the moral evil of murder, it just outweighs it.
I think. Don't ask what I'd do "in real life", I won't know until it happens (which I pray it will not)
I too was surprised. My gut reaction is this is "my guy" syndrome where people are still reasoning in the third person even though the problem says "you" - if you were to mad-scientist this experiment I bet you'd have a near 100% self-preservation response.
On the other hand, some of the responses have a small contingent disagreeing, and to explain those I would remind you of the Lizardman Constant [1]. That is, some pollsters just want to watch the statistics burn.
I don't think 59% of people would succeed in doing it, and I say that as part of the 59%. At the same time, I thought it was the "which is ethically preferable" answer, which is what I interpreted the questions as being about.
I'd sacrifice myself for five clones, not five random people. Or for the robots. Robots and clones have got it tough. Random people are probably dicks, and I've not been a great person either.
The fact that it's so low is kind of disturbing, and the fact that you're so cynical about it being so high suggests maybe you should reassess your values.
Values don't mean anything if you're dead. Five complete strangers? Knowing how terrible people are, they would probably save themselves too if they could. Every man for himself.
Life experience that would make me want to die if I were in this scenario? I sincerely hope I don't experience anything like that.
But I'm glad that innocent-minded people like you exist. Here's a question, if you had the choice between killing me or yourself, who would you choose?
"you" are not the thoughts and emotions that happen to reside in some sack of meat. Or at least that's not a complete picture. In some sense "ask not for whom the bell tolls; it tolls for thee" should be taken literally. 100% individualism is not the only mode of being a person, and not the default or "best" mode.
These are mostly gut feelings and unfinished thoughts. I'm not sure about any parts of it except "it's not a complete picture". I'd love to hear any thoughts on this and reading recommendations are welcomed.
A trolley is heading towards a line of people corresponding to every natural number. You can pull a lever and divert the trolley towards a line of people corresponding to every real number between 0 and 1. What do you do?
But actually, if you choose "pull the lever" after having done the maths, then you are probably a misanthrope. Either that or you believe very strongly in the theory that if you do nothing you're not involved. But really, you're just a misanthrope.
hmmm.. i'm not a very mathy person but seems like an interesting question.
Aren't they both just infinite? There's no such thing as a bigger infinite?
thinking about it out loud here tho I would say that every natural number is a BIGGER number than every real number between 0-1 because, well, 2 is bigger than 0-1.
but then again, since you said corresponding to every number... 0.000...(on to infinity)...1 is a number in itself, so you have infinite numbers along an infinite number of 'places'.
Final Answer, you save lives by natural numbers, because real numbers have an infinite number of 'digits' per 'places' & those 'Places' are equal to all natural numbers.
Take every natural number and assign a unique real between 0 and 1 to it. Put all of them under each other. Now take the 1st digit of the 1st number, the 2nd of the 2nd and so on, and set each to a different digit than the one you took.
Now you have a new, unique real (because it differs in at least one digit from every real taken so far) but no more naturals to assign it to.
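The construction described above can be sketched concretely for a finite prefix of such a list (the digit strings below are arbitrary made-up examples, not anything canonical):

```python
# Finite illustration of Cantor's diagonal argument: given any list pairing
# naturals with reals in (0, 1), the diagonal construction yields a real
# that differs from every listed real in at least one digit.

def diagonal_real(digit_rows):
    """Return a digit string that differs from row i at position i."""
    new_digits = []
    for i, row in enumerate(digit_rows):
        d = int(row[i])
        # Pick any digit other than the one on the diagonal
        # (avoiding 0 and 9 sidesteps the 0.999... = 1.000... ambiguity).
        new_digits.append(str(5 if d != 5 else 6))
    return "0." + "".join(new_digits)

# Suppose our "enumeration" assigns these reals to naturals 0, 1, 2, 3
# (digits after the decimal point):
rows = ["1415", "7182", "3333", "5000"]
missing = diagonal_real(rows)
# missing differs from row i in its i-th digit, so it can't be any row.
for i, row in enumerate(rows):
    assert missing[2 + i] != row[i]
print(missing)  # 0.5555
```

The real thing is the same move applied to an infinite list: whatever pairing you propose, the diagonal real is not on it.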
"A fundamental theorem due to Georg Cantor shows that it is possible for infinite sets to have different cardinalities, and in particular the cardinality of the set of real numbers is greater than the cardinality of the set of natural numbers."
There is one person in front of the trolley, and five people on the side track. But nobody knows that you saw the five people, and there's video of you noticing the one person.
"Do Nothing"
No, you have to click a button to do nothing (two levers), so this is an incorrect demo; it should have had a timer or something similar (so that "doing nothing" would be synonymous with doing nothing).
Since our trolley is at rest, I pose that the best choice is to do nothing (really nothing, not the lever marked "do nothing") so it never runs over anyone.
I pressed "do nothing" twice before I realized this, so you can rightly call me a fool.
I thought it was interesting that so many people chose to pull the lever and become personally responsible for the outcome in many of these scenarios. Take the ones not involving people dying, for instance: I don't want to be personally responsible for destroying 1 trolley just because some idiot set one on a path to destroy 3. I can just see the shitstorm some stupid company would kick up to make me pay for it.
But if you are by the lever and know the outcome of pulling it (as is implicitly the presumption in these thoughts experiments), are you not personally responsible for choosing the outcome either way?
I first disagreed with the majority at level 10 (mercy). If they are dead anyway, and speeding the trolley up only might make it less painful, with no guarantee, what is the point of playing God? Nobody can guarantee it won't actually be worse if I do something. This isn't exactly euthanasia with drugs proven to work.
Level 12 (best friend): I am surprised the majority value the life of one best friend over the lives of 5 strangers; in the end they are still people, and 1:5 seems like a no-brainer. Now, if it were family I would agree with the majority, but a friend isn't family, and even for family I'd have to consider whether it's an old grandma vs 5 young people.
Level 13 (can't see): well, then they should have chosen a different picture, but from the picture it seems very likely you can save 5 people instead of 1, even if you aren't exactly sure.
Level 15 (age): I'm surprised most people are ready to sacrifice 5 elderly people (define what age that is, and whether they still have 10-30 years of life ahead) and let 5 families suffer, instead of 1 baby that doesn't even have self-awareness yet; the family can have another baby, and they haven't grown attached to it over years.
Level 17 (math): I'm quite bad with these odds and math, so when unsure I went with doing nothing, and was with the majority.
Level 19 (economic damage): agree with the majority, but I don't understand why people would rather destroy trams worth 900K than trams worth 300K.
Level 20 (external costs): I don't think it's up to me to decide in what order the trams should be replaced; they also don't say whether the tram will be replaced at all. Way too many variables to consider which decision is better.
Level 22: wow, the majority of people are mean pranksters.
Level 23: TIL the majority of people are willing to sacrifice someone's life over littering and would rather play God. I'd say 1 life equals 1 life, and I won't change the direction of a tram over something as small as littering. Now, if you told me that person was a murderer or a dangerous driver, that would be different.
Level 25 (lifespan): is one life really worth ten lives, each with at least 10 years ahead, to the majority? 10 years is quite a long time; 50 is longer, but it's still just one life against many.
Andrzej Sapkowski in the Witcher books, through Geralt, said something that might apply to this situation: "Evil is evil [...] Lesser, greater, middling, it's all the same. Proportions are negotiated, boundaries blurred. I’m not a pious hermit. I haven't done only good in my life. But if I’m to choose between one evil and another, then I prefer not to choose at all"
Agreed, I think the utilitarian approach that goes something like "let N and M be two integers such that N > M, then N human lives are worth more than M human lives" (with subtleties related to age and so forth) is very, very dangerous. It can lead to all sorts of atrocities in the name of some greater good. My approach is to never pull the lever unless 1) I would get killed or 2) humans would get killed vs something non-human on the other track.
This quote is very popular, but the moral of the story you are referencing is the opposite. Geralt rejects this point of view at the end, because inaction is also action.
I think it's so popular because of the Killing Monsters trailer.
He does end up actually making a choice. But the choice is not between killing Renfri and Stregobor (the initial lesser-evil choice); he chooses to stop Renfri's gang from killing innocent people to draw Stregobor out. Renfri could then have left and looked for another opportunity to kill Stregobor. Geralt told her at least twice to leave; she refused, saying she had made her choice, attacked Geralt, and died. So Geralt ended up killing Renfri, but as a result of her choice, not his. And he did stop Stregobor from taking her body to study it (which was Stregobor's initial intent). So he definitely did not take his side. I would argue Geralt chose between good and evil, not between two lesser evils.
One interesting thing about the trolley problem is that while most people won't pull the lever when the people involved are all random strangers, most people will if the people are all relatives. That makes me think that, subconsciously, part of the issue is that if you pull the lever you're afraid the victim's friends and family might blame you for the death and seek revenge, but if you don't, the friends and family of the victims will all go looking for vengeance from whoever owned the trolley and you'll be in the clear.
I certainly wouldn't direct a careening trolley from 5 sleeping humans to 1 awake human who would feel terror before dying, but if it were cows or something like that I certainly would.
A potential reframe of your initial comment on subconscious thought is that we know the morals and values of our relatives better and know or think that they would be willing to sacrifice themselves whereas we have no idea the values of random strangers.
I never understood how the trolley problem applies to real life.
When is there ever a situation where you have perfect and complete information about a scenario? Only in thought experiments.
The situation where the boxes are probabilistic was more realistic. You’re never sure in real life that you will certainly do less harm if you pull vs not pull.
The original trolley problem showed that people find it less reprehensible to kill people when no action is taken. It's presented both ways: "pull the lever and kill five to save one" and "pull the lever to kill one and save five". You'd think people would choose to kill one person in both cases, but it turns out people are biased against taking action (pulling the lever) regardless of the wording, so when it's worded as "kill five if you do nothing", a lot of people will do nothing.
That's not a bias, that's completely logical. The only thing in this world you can control is your own actions, so it's completely logical not to take an action that will kill someone.
Think about it: the number of inactions available to you is infinite; there are an infinite number of different things you could have done that would have saved someone's life. So inaction is the default state of the world, but taking an action has meaning, and people choose not to kill by taking an action.
Aside from the above, you are assuming that 5 people are worth more than 1, but that's not something you can know; you can't know the worth of another person. So who are you to choose to kill someone?
I would argue the people’s choices are appropriate.
In real life you never get perfect information, so doing nothing as opposed to pulling represents the human's uncertainty about the parameters of the scenario.
I don't think it's supposed to apply immediately and directly to real life. But it's supposed to use moral intuition to highlight the balance of various philosophical frameworks (mostly consequentialism vs deontology).
One common real-life debate where these moral trade-offs come up is voting choices in our first-past-the-post system.
A consequentialist might say that you should vote for the lesser of two evils that has a serious prospect of winning. That, while they might not be someone you support in the abstract, in the interest of harm mitigation you might give them your vote to deny victory to the greater of two evils.
Whereas a deontologist might say that voting for a candidate that you actively dislike and consider evil is a harm in and of itself, and that you should not cast a vote for someone you wouldn't want to see in office. Even if that means the greater of the evils ends up winning.
When I was younger and struggling with this question, the trolley problem was informative to me, and has generally led me to a harm-reduction strategy when it comes to voting (while also advocating for a reform of the voting system).
That question is actually really easy. No party represents the population. There aren't any meaningful choices to be made. There is no such thing as a wasted vote if there is no party worth voting for.
They all suffer from the same flaw. The ones near the left are particularly hypocritical. My hunch is that they depend on poor and angry voters, so there is no way they would ever try to achieve prosperity for them while risking losing those voters.
The point of the trolley problem isn't to answer the trolley problem (either globally in the framework of moral absolutism or locally in the framework of "what do you personally value?"). The point of the trolley problem is to scaffold the discussion about how you answer the trolley problem and other moral dilemmas. If you prefer to switch the trolley to actively kill one person over passively allowing five to die, the interesting point is not "that's absurd, the real world doesn't work that way!" but why the real world doesn't work that way. What else goes into our decision to not harvest one person's organs to save five others?
It's not that unrealistic. Imagine driving a car and a kid runs onto the street. You can swerve onto the sidewalk, but there are people walking there you'd hit. The question of whether you'd take action to save someone while (potentially) killing someone else is interesting. And it has been discussed in the context of self-driving vehicles.
But from all I remember, the outcome always was that an AI should never actively make a decision to sacrifice someone. And that's also how I view the trolley problem. Actively making a decision feels worse for me, even if fewer people die in the end.
I feel like that's not realistic either—when you encounter these sorts of situations in real life, you don't really make a thought out choice because it happens so fast. People swerve and crash into trees trying to avoid rabbits, it's not a reasoned thing.
These problems do, however, come up fairly often in so many other areas. Cryptography is probably one near and dear to many people's hearts here—supporting cryptography directly saves many lives (journalists in totalitarian regimes, people in abusive relationships, the general wellbeing of people being able to communicate securely for a myriad of industrial purposes, etc.) but it also, to a lesser degree, directly leads to some deaths (terrorists are able to organize and hide their plans). Do you spend your life developing more powerful cryptographic algorithms knowing that it will have some small negative outcome that your work is partially responsible for? Or do you do nothing at all and have a larger number of people suffer as a result of you not having produced the work?
In real life, though, you never have perfect information about whether your action (or inaction) will have its intended consequences, nor do you have the time for a complete, rational analysis.
It's unrealistic because the trolley problem is basically a torture apparatus that you're instructed to operate for unknown reasons. Our decision to pull levers or not, means nothing in the context of why the situation exists in the first place, and why you have the job of executioner.
We have time to think and weigh things up in regards to pulling the lever. But in the driving scenario, a split-second decision is needed. Muscle memory and reflexes come into play. Everyone will apply the brakes anyway. There is no "do nothing" option in the driving scenario.
Agreed re actively making a decision: my answer was mostly not to pull the lever, and if I analyse my feelings it's because then the results aren't my fault. Pulling the lever makes me party to the situation.
Disagree re realism though. You never get a chance to think in real life, and any situation in which you could deliberate what the moral course of action is, is a situation you could avoid entirely. Really the closest thing to trolley problems in real life is public policy decisions, which are real and affect all of us. So yes, I guess I've argued myself around to agreeing again :).
It really does apply, but maybe not always for obvious reasons.
I really would encourage everyone to watch Michael Sandel's political philosophy lectures (titled "Justice") at Harvard University. It's available on Youtube.
The covid pandemic is a trolley problem. Lock down and cause some hardship (including maybe some deaths), or let the pandemic spread and kill lots of people?
The combo tradeoff between saving tens of thousands of lives of people who had, on average, 6-12 months of life expectancy, versus potentially shortening the lives of millions or billions of people by only months or weeks, and not seeing those effects for 70-80 years, seems to have been part of the covid shutdown tradeoff calculation.
Granted, it has all the additional complication of the uncertainty of the real world, but there were a lot of claims of, "I can't believe you won't shut down for (weeks/months/years) to save grandma!"
I wonder, in the end, if those decisions will end up having saved or cost more total days of human life.
How many people's lives are you willing to shorten by even a week or a month due to increased stress to save the life of someone whose comorbidities will likely kill them in 6 months anyway?
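That "total days of human life" question can be put as a back-of-envelope calculation. Every number below is an illustrative assumption invented for the sketch, not a real estimate:

```python
# Purely illustrative, assumed numbers -- NOT real epidemiological data.
# Compare total days of life saved against total days of life lost.

deaths_averted = 50_000        # assumed deaths averted by shutdowns
months_left_each = 9           # assumed remaining life expectancy (months)
days_saved = deaths_averted * months_left_each * 30

people_affected = 10_000_000   # assumed people whose lives are shortened
weeks_lost_each = 2            # assumed weeks of life lost to stress etc.
days_lost = people_affected * weeks_lost_each * 7

print(days_saved, days_lost)   # 13500000 140000000
```

Under these made-up inputs the days lost dwarf the days saved by an order of magnitude; tweak the assumptions and the conclusion flips, which is rather the point of the question.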
"High risk," yes, but when they were reporting on people who were actually dying of covid, for most of the first year, the data was that it was people with comorbidities, people who already had significant health conditions. It wasn't the 70yo marathoners who were dying. It was the ones who had CHF or COPD and obesity who were. (OF COURSE there are exceptions to this, but the majority of people who were dying were not otherwise healthy.)
Stress. The kind of social, personal, financial stress people experienced, especially the "essential workers" who had to continue working in person, or people who were already struggling to make ends meet who suddenly didn't get paid what they were expecting, went through significant life stress events.
Or around the world, millions of additional people have been added to the ranks of the "food insecure," who now run increased risk of malnourishment and starvation.
There's certainly an extremely privileged cohort, probably heavily represented on HN, for whom transitioning to WFH was no big deal or actually a benefit, but we're still feeling the effects of supply chain disruptions even in the wealthiest countries, and there are millions (billions?) of people whose lives are much more severely impacted.
I think it would be better if the choice went both ways, e.g. the track is coming to a fork and you have to make up your mind either way. As posed, I think this is biased towards doing nothing.
68 frags for me.
We had horrible exercises like this back in school. But then it was often mentally disabled people versus healthy people.
Huh I never thought of that possibility.. I did think of the possibility that earth would be totally obliterated or no human population but not the possibility of only 5 to 50 or so people..
These are presented as ethical dilemmas, but are they also legal dilemmas? Take example 1. If you can clearly see that one person will die if you pull a lever, and he will not die if you don't, is that not murder or manslaughter? If not, why?
It's a fun thing to think about in a no-personal-consequences sort of way. If it were a real-life situation, you can bet nobody is going to touch that lever with a 10-foot pole; otherwise they're likely going to prison for a long time, with how modern prosecution works.
It was interesting, although I was sort of expecting a more detailed summary of the results at the end (even just a recap of all the percentages, to see in what questions I deviated the most from the majority, would have been nice).
Well, that was fun. My internal geek, however, is annoyed that it seems to miss the point of the trolley problem: that if you do nothing, you're not responsible. Answers seem to indicate that people don't get that.
I know there are flaws to my reasoning, but I can't shake the thought: How/Why those poor people ended on the track is more important to fix than to focus on the responsibility of the guy pulling the lever.
I also take this stance. In the real world, that would be the top concern, not the bystander who is thrown into the situation. Or if the situation is unclear, better run away instead of getting involved in a mess which you didn't cause.
If someone says "ignore all the real world concerns", then does it even matter? It's the death of fantasy people, who cares?
But I would take an exception if it were family or friends. In a real scenario, saving them would be worth more than saving strangers or avoiding legal problems.
My reasoning is: say that bad city design leads to a lot of pedestrians about to be struck by cars in intersections. Does it really make sense to focus on the drivers?
The premise of trolley problems is that you can’t generally prevent situations where you have to choose between two evils. But we’d like to have morality rules that allow us to make the “right choice” in any situation. It turns out that it is really hard, if not impossible, to have such a rule system that (a) allows to derive the “correct” choice, that (b) is logically consistent, and that (c) people agree with all its results.
I chose the route of harm minimization, given this forces an answer. My kill count was 44.
We really should have 3 answers: 'refuse to answer', 'pull lever', and 'don't pull lever'.
Don't pull lever- is a choice that you make, and acknowledges the intent in doing that action.
Refusing to answer- is not accepting the responsibility forced on you.
As a comparison, it's like when some terrorist in a movie says to someone, "You can decide who is going to die". The better answer is to not choose. If they force the choice by threatening to murder several people, then you choose yourself.
Ok, very curious about others' metrics and heuristics for actions that are defensible. Mine are probably composed of:
- Net degree of suffering imposed (+ nth-order suffering imposed on eg. family)
- Thus, nervous system pain receptor net effect (cat>lobster)
- Age of individuals and thus life-years saved (baby>elderly)
- Weight of burden of (in)action on own life (++suicide risk)
- Confidence that the train will act as proposed
- Value of human life being absolute over non-life (human>bigMoney)
It’s likely the universe is deterministic, but that does not mean everything is predetermined: it just means the universe is mechanistically governed by certain rules.
Those rules create a fractal spectrum of causal events not unlike those seen in the Game of Life. From the interplay of those causalities emerges an ordered bubble of discrete, relatively-localized energy-matter we perceive as our physical universe—and possibly countless others like it.
There’s only one of them that really gave me a lot of pause. 5 elderly people or one baby? Not necessarily that specific problem itself, but the Sorites paradox that is implied.
What about 5 elderly people with one day of natural life left vs one baby with an expected lifespan of 100 years? Are we trying to maximize total time of human life? What if it’s low quality time though? 1,000,000 elderly people that had one hour left to live or one baby?
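For the 1,000,000-elderly-with-one-hour-left variant, the raw aggregation is easy to run (round numbers assumed throughout):

```python
# Total expected life-hours on each track, under assumed round numbers.
elderly_count = 1_000_000
elderly_hours_each = 1                    # one hour of life left each
elderly_total_hours = elderly_count * elderly_hours_each

baby_years = 100                          # assumed lifespan of the baby
baby_total_hours = baby_years * 365 * 24  # 876,000 hours

print(elderly_total_hours, baby_total_hours)
```

Under these assumptions the million elderly narrowly come out ahead on raw life-hours (1,000,000 vs 876,000), which shows how sensitive pure aggregation is to the exact numbers chosen, and why the quality-of-time question can't be dodged.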
I think the baby is the logical choice for utility maximisation: he still has a chance to develop into anything; maybe he could become the next Einstein or save the world. But the 5 elderly people's contributions to society are probably over, however large those may have been in their time.
Level 10 (pulling the lever speeds up the trolley, reducing the suffering of the 5 people who are going to get run over anyway) is basically the argument for legalizing assisted suicide.
Of course, I pulled the lever (I guess it's a bit different in that I didn't get consent from the people, but presumably the difference in their lifetime was minuscule, and the entire remainder would have been dread of impending death anyway)
This is an interesting one that really drives home just how dangerous it is to try to learn from trolley problems.
Yes, if you have perfect information, speeding up the trolley is likely the merciful thing to do.
In real life, you never have perfect information. Speeding up the trolley is definitely the wrong thing to do because it reduces the amount of time that you and others have to find an alternative solution that might save somebody.
(I still agree with your point about assisted suicide. Our points are compatible.)
I think if you choose the option to kill 5 people 100 years from now, that 5 should not be part of your total kill count, because those people won't be dead for 100 years. In fact, it is possible you could commission a brick wall to be built somehow in front of the portal opening at the other end to divert the trolley in time, sparing those people, since the future (in my opinion) is not certain.
But what if in 100 years most of humanity dies, and all that's left is a group of 5 survivors who took shelter in a trolley station? Then all of a sudden a trolley appears out of thin air and kills all 5 of them. You would be the man/woman that ended the human race :)
The measured side of me always wants to err on the side of inaction.
The one about life savings is a difficult one and I think at a distance without the peer pressure or emotional bias of a high pressure situation I would certainly choose to save the life savings.
More interesting I think would be a question involving family members. How many people do you kill (by pulling the lever) to save a parent, or your child?
In the same genre there is the recently released Trolley Problem Inc [0]; though it starts out in the same way, it has some interesting things happening throughout the game.
It's easy for me: never, under any circumstances, operate railroad equipment. You can't possibly know what is on the other track or what it will do to the train; you could kill thousands. Even if the railroad CEO is standing beside you, you have no way of verifying it. Seriously, don't touch railroad equipment, people will die!
This falsely puts the burden of responsibility on the lever man, while it is those who tied the people to the tracks, launched the trolley, and forced the lever man to incur all the blame who are the real culprits. Making people believe the contrary is the fundamental principle of operation of all politics.
It's curious/ironic that instead of teaching us moral philosophy about ourselves, boundary-testing trolley problems becomes a useful study for the purpose of social engineering.
The problem with any kind of invisible hand is that once we talk about it, it becomes visible and loses its autonomy.
There was a trolley meme (actually about student debt forgiveness) that had a long line of people dead, more live people, and the cut off to no one. The dilemma was, “You can pull the lever, and save everyone, but would that be fair to everyone that was already killed?”
These trolley problems are the stuff of nightmares wherein one becomes a world leader.
If all outcomes are horrible, then your lever pulls or inactions will always be subject to valid condemnation. If you take a risk, you are judged by the outcome, not whether your choice was a good calculation.
Minor UX, could be just me, sometimes I got the “pull the lever” and “do nothing” options confused and did the wrong thing. Would be interesting if instead of pulling the lever, I could just pick the option, either underlined in the text, or as a choice.
The original/vanilla Trolley Problem is already absurd and not a legitimate problem at all. Not pulling some lever is just as much of a choice as doing it. The only salient difference is that fewer people are harmed, so the choice is obvious.
In that case I would stop them, but trolley problem is more convoluted. If I do something I'll need to take responsibility, if I don't do anything the trolley company has to take the responsibility. Related: https://i.redd.it/9lbq0hx2zkk81.jpg
If pulling the lever is going to kill someone, I never pull it. The only hard one is if pulling it will save my own life, I probably would in that case but it would be wrong. Pulling the lever makes you a murderer instead of a bystander.
> Pulling the lever makes you a murderer instead of a bystander.
Ah that, I'm not too sure about. If you have the means to avert a disaster and divert the trolley but consciously choose not to do so, you'd be a murderer either way right? The baby question really hammered it home for me, most respondents chose to have the trolley kill the elderly people instead of diverting it to the baby. Sure, you stood by and did nothing but you're STILL a murderer no matter the outcome. The minute you stand by the lever you consciously choose who's going to live or die, regardless of whether you do nothing or pull the lever. Inaction is action.
So, the progression I like that got me where I am on this, ultimately silly, exercise goes like this:
1. The trolley problem, 5 v 1 with a lever.
2. The roller skater:
There's a trolley that's going to run over 5 people. Someone is skating next to the train tracks in enough pads that they would stop the trolley if they were on the tracks. Do you push them into the trolley's way, saving 5?
3. The brilliant doctor:
A brilliant doctor, on the cusp of a discovery that will save the lives of millions has a heart attack in the OR one night and needs an immediate heart transplant. Due to a contrived morality problem, you are locked in the hospital with no way to get a fresh heart delivered. In the room with you is a stranger who has the right blood type. Do you cut their throat and take their heart to save the doctor's life, guaranteeing that a million people will get life saving treatment over the next five years?
For me, once you get there to the end, it's obvious that it's not right for me to choose to end one person's life for the potential of saving others. Trying to add up lives on the balance of a scale is an impossible effort. Better not to kill anyone. I didn't tie anyone up on the tracks.
(I'm not a philosopher but I think that this is a Kant vs. Utilitarianism kind of argument, in the end.)
P.S. A friend of mine poses a different moral puzzle: You are walking by a pond and see a child starting to drown, do you jump in and save them?
There are people metaphorically drowning all around us and yet usually I do nothing. Maybe morality puzzles aren't that useful for navigating the real world.
I think the trolley problem serves to highlight how important it is to focus on solving real problems rather than hypothetical problems. The real world usually offers addition information that helps resolve dilemmas.
But if one real world problem was tied to the tracks, and you could flip a switch to a different track where five hypothetical problems were tied to the tracks, would you switch it?
People wouldn't laugh about these "silly" and "fun" moral problems if they had been forced to push their own mother into an oncoming train to save their son's life, like I was.
Aside from the kill count calculations, the hardest thing for me here would be actually to be in the "position of power". Who are we to decide such things, even if we have some info (or think we have).
Anyone who chose to destroy their life savings (like I did) is totally lying. I could also choose to save hundreds of lives and destroy my life savings by buying bed nets, but I don't pull that lever.
56, that was quite fun. Pre-corporate internet vibes. It'd be cool if there were a way to see a one page view of the stats at the end (and where you fit with each) instead of having to redo to see again.
I dunno, but I did. It had an empty ring to it though.
I don't feel like one can solve Philosophy. I've never thought of philosophy as a solvable thing, like all the philosophical conundrums I've encountered were specifically designed not to be solved, but only contemplated. And not just koans, either.
By the time it got to the trolley-sacrificing-more-trolleys section (LOL), was I the only one thinking "I don't care if this is the wrong call, fuck all these trolleys!"? :)
did anyone else wear their unphilosophical sadistic karma-denying internet-can't-hurt-me hat to intentionally skew phony statistics because they thought everyone else would do the same?
No :-) Come on now, there's no good reason to answer random/opposite from what you would normally do, just to skew statistics. Obviously this is the internet we're talking about, so there are bound to be "silly" answers and these count as well. We can't all think the same and answer the same ;-)
UI bug report: on my iPad, the pictures are on top of the text. This is true whether in landscape or portrait and in a few cases caused me to not understand the choice.
You can do nothing and kill one human, or pull the lever and kill one alien who can experience consciousness, feelings and pain at a higher level than ours.
Any third choices? Maybe pulling the lever halfway to derail it? If it's going fast, that means more deaths; otherwise, you may have some injuries but everyone survives?
This needs a variant with inverted lever logic (i.e. where pulling the lever saves 1 but dooms 5). You know, to weed out the people who just like murder.
I do not understand why so many people would let the trolley destroy their fortune to save 5 random people. Why would I ever care about 5 random people?
Because of empathy I guess? I can go on living perfectly fine without my savings ("fortune"), without losing any quality of life really. So if I can make the lives of five people's families and loved ones better, why wouldn't I?
Cause most likely they will be thankful and reward you financially and in the long term you might get your fortune back and then some. You have to play the long game! xD
Eh, I decided not to blow them up because the notion of perfect knowledge is foolish. Those folks have an eternity to figure out how to stop the trolley or jump out. They'll manage.
Also: I'm not going to presume that I have the right to take such an important choice away from them. If they tell me to pull the lever I will, but not before.
Chickens are sentient. Would you have changed your answer if it were chickens?
I chose to save the human as the prompt didn't set a bar for the level of sentience. Also partly because of shameless speciesism. I love animals, have gone mostly vegetarian, but I would still save a human over any other animal.
Would you save a fit, healthy monkey that can communicate with us using sign language, or would you save a human in a permanent coma? (Or whose brain is damaged and will only ever have the mind of a 1 year old)
Lots of animals die due to our existence, and not just for our food. We accidentally kill things all the time with our vehicles and the pollution we cause. I wish it weren't the case, but you can't step outside without ruining some ant's day.
To then be faced with a choice between killing one more animal or a human... my heart would bleed for the monkey but to do anything else would be murder and pretty hypocritical unless one lives an extremely vegan and eco friendly lifestyle.
No medical issue will strip away the fact that it is a human life. I'll make it more absurd for you... they could have a terminal disease and likely be dead tomorrow and I'd still choose to sacrifice the monkey to give them 24 more hours.
I cannot really say what I would do if I was a monkey, it's like asking me what a circle with corners would look like. Too far outside my ability to imagine. I wouldn't sacrifice any number of humans to save any kind of robot either, unless it was a robot critical to human survival like a surgical robot in the middle of a natural disaster.
I chose to spare the robots. Not for kind reasons, mind you, but rather because I did a "maximize human death and suffering" run (and it's fascinating that the minimum percentage of agreement throughout said run was 10%).
I saved the robots, last thing I want is their sentient robot friends and family going on a rampage to destroy all humans because we didn’t care about them enough.
To really be sure, I think you need to test the switched case as well, i.e. would you do nothing and kill a human, or switch and kill a sentient robot?
I strictly don't believe sentience is possible for robots. I think robots can and may eventually pass the Turing test to a meaningful degree - but only in the sense of imitating/fooling us.
So this question is more like would I like to destroy a couple of expensive robots or kill someone. I chose to 'kill' the robots. Some might see this as cheating or not in the spirit of the question being asked - for me, I can't and will never concede 'programmable sentience' as any sort of reality.
Yep, it's essentially the same question as the cat vs the lobsters. Most chose to save the cat, and that's not exactly surprising given the western context of cats as companion animals. We just value some lives more than others.
A clone of you, isn't you, it is by definition a copy.
You do not possess their memories when they do something else, you do not control them, you do not have their consciousness. It isn't you. When you cease to exist you do not "live on" (in the physical sense) through them, you are dead.
The idea I mentioned is hard to explain in a comment, but I'll try to convey the essence, at least what works for me.
Which is: there is a chance (and the further reasoning is based on it) that we all don't feel like a single entity only because we are physically separated by low-bandwidth channels, e.g. visual, auditory, text, as opposed to parts of our brains, which are connected by high-bandwidth links. This creates an illusion of self, while we are all actually a single phenomenon divided into islands. Under this idea, the thing that matters (to some) is your own physical structure, not which eyes specifically "you" are looking out of at the moment, because "you" are everywhere.
Some call it the hard problem of consciousness, or a part of it, but details may heavily vary.
Hope that provides a glimpse into it.
Added: also curious, from your perspective, do you really die when Scott “beams you up”?
I had a total bias against intervening to change the path of the trolley toward one or more people (regardless of what was on the other track), but also a bias toward action if something other than a person could be run over instead, and I only got 69.
I'm surprised that "saved" 15 people compared to a bias toward inaction in general.
I thought about doing that too, but then I realized my best friend would never forgive me for choosing his life over theirs. He'd unquestionably wish to sacrifice his own life instead, and it felt appropriate to honor that desire.
Oh no! A pandemic virus is spreading throughout the world, do you take sensible precautions and get vaccinated when it's your turn, or do nothing?
What if you don't know exactly what the lever does? What if pulling the lever makes all the infants in the world cry for an hour straight? What if pulling the lever only saves a life one time in 1000, but it sprains your wrist 1 time in 10. What if the sensible precautions are very very annoying? What if you're not sure about the side effects of the vaccine?
Ugh, masks are not that bad. I literally went to a party and danced until 4AM with my friends in a N95 and it was fine. We only unmasked to do tequila shots.
If you're talking about vaccines, I have an all natural colloidal silver supplement to sell you.
What eliminates social cohesion and trust is
* the disease itself
* the people instrumentalizing the disease and/or the sanitary measures to partisan ends.
What if the elimination of the societal cohesion and trust was the product of a cynical, self-serving multi-decade political movement and propaganda campaign, and had only a little to do with pulling the lever, but the people who pulled the lever got most of the blame for it?
Like somebody else has noted, the trolley problem exemplifies that people assign different weight multipliers to proactive action vs inaction, instead of strictly comparing the direct expectation of both outcomes. Two real life applications of this that I find interesting are:
* Tech company employees who choose not to sell their vested RSUs (inaction) as opposed to selling them and diversifying into other investments such as index funds (action) even though the latter is economically more correct.
* Members of the general public who refused to receive the COVID vaccine (inaction) as opposed to getting vaccinated (action). In their minds, the risk of getting injured due to vaccine side effects is incorrectly weighted more than the risk of developing complications from COVID, because the former would be a consequence of a conscious action (i.e. "pulling the lever") as opposed to doing nothing (letting the trolley go down the default path of getting sick).
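To make that weighting concrete, here's a minimal sketch of the two decision rules being contrasted. The `action_weight` multiplier and the costs (in lives) are made up for illustration; they aren't from any real model:

```python
def choose(cost_action, cost_inaction, action_weight=1.0):
    """Pick the option with the lower (weighted) expected cost.
    action_weight > 1.0 models omission bias: harm caused by a
    deliberate act is felt more heavily than equal harm from inaction."""
    return "action" if cost_action * action_weight < cost_inaction else "inaction"

# A strict expected-value comparison pulls the lever: killing 1 beats letting 5 die.
print(choose(1, 5))                      # action
# With a strong enough omission bias, the same numbers flip.
print(choose(1, 5, action_weight=6.0))   # inaction
```

The RSU and vaccine examples above are the same structure: the "direct expectation" comparison uses `action_weight=1.0`, while the observed behavior implies a weight well above 1.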
Surprised at the answer to the "worst enemy" one. I wouldn't really call anyone I know personally my worst enemy, so I figured it'd be someone like Mao or Hitler, and of course I'd just have it run them over.
My worst enemy is the side of me that constantly doubts and berates me. But I guess I should accept that it's just trying to protect me and I needed it in the past but now I need something else.
I keep not understanding why this "problem" exists. If pulling the lever kills someone, then you killed someone. You can only do that if it's yourself, or if the number of people you save grants you some immunity, like "kill a guard to save a city".
In the vast majority of times you should do absolutely nothing: It only becomes your fault if you touch it.
The whole thing was nonsense. I don't think any individual should be in the position of deciding who lives and who dies. The common fallacy would be something like "let's pull the lever to save 5 by killing 1." But how can anyone be sure it would be worth it to save the 5 by killing 1? To me, life is not a number; unless someone involved means something to me, I'll let it be.
The question regarding baby vs old people was really telling. It’s practically the argument for / against abortion. What’s strange to me is that someone would choose to kill five people who are capable of thought instead of an undeveloped human being. But I suppose that’s why we’re seeing the reversal of Roe v Wade being supported. Sigh, fuckin religion.
You're way, way overthinking it. The world isn't black and white, this is a game, and there's always more choices than just two.
I'm pro choice and my choice was to run over the old people.
In virtu's defence, it's difficult to avoid overthinking things in this day and age. We've been conditioned to look for the overlap with our politics in so much that it consumes many of us. Culture wars take heavy psychological tolls on the participants.
I think a common utilitarian way of thinking is years of life left. A baby has a far greater expected value of remaining years and thus is often assigned higher moral value by people. For me it was a no-brainer to kill the old people, although I was imagining them as very old such that they each only had ~5 years left on average.
Mostly agreed. But another society that values the wisdom of its elders could see that as a large loss, whereas the baby is replaceable with very little cost to society. I wonder if the stats would be different in an Eastern society.
I think of it from more of an anti-natalist perspective. The child will have a far greater chance of suffering while the adults will have a chance for less suffering due to their expected lifespan. That’s why I thought about it in the frame of abortion. We’ll typically assign the notion of having a life positive value, but from my perspective that’s just a chance for more suffering.
I chose to read it as "would you kill a teenager, or all of that teenager's caregivers." I felt that sufficiently raised the moral stakes, to make both choices similarly hazardous, while fitting into the parameters written into the scenario.
It's a big difference that someone is already in the way. If you take no action they die anyway. If you suddenly collapse on the ground or get struck by lightning before you can pull the lever, they still die. It's not your fault that they died.
In that case, it is not really linked to abortion: the baby is already born. A convincing (to me) argument that I heard during COVID debates is that by saving a baby you save, on average, 80 years of potential future life; for the elderly people, 5 × 10-15 years.
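For concreteness, the back-of-the-envelope arithmetic behind that argument (these are the comment's ballpark figures, not actuarial data):

```python
# One baby with ~80 years of potential life ahead,
# versus five elderly people with ~10-15 remaining years each.
baby_life_years = 1 * 80
elderly_life_years = 5 * 12.5  # midpoint of the 10-15 year range

print(baby_life_years)     # 80
print(elderly_life_years)  # 62.5
# Under this metric alone, saving the baby saves more expected life-years,
# though the margin is narrower than "one life vs five" makes it sound.
```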
> Supporting abortion rights doesn't mean killing babies.
There is something very revealing about the GP's attitude though. They literally interpret someone wanting to avoid killing a baby as being the exact same thing as opposing abortion. So in their worldview abortion really is all about killing babies.
Eh, not so much. See my other comment to the sibling thread in regards to this discussion.
There are two parts to this discussion that people are pointing out that I could have done a better job explaining.
First, that I'm confounding the trolley problem of killing a baby with abortion. I'm well aware it's a thought experiment that's used in philosophy. I understand that killing a baby is different from abortion. My main point, which wasn't articulated at all in the first thread, was more from an anti-natalist perspective. The society I live in primarily assigns more value to the fetus and/or baby than it does to the people who are already living.
A lot of the argument comes down to whether or not the child will have a chance to do great things and to experience life. Killing the baby means stopping them from having this opportunity. However I look at it from the perspective of potential suffering and current ability to feel and think. Killing the baby really doesn’t stop the baby from thinking because well, baby. And more importantly killing the baby stops a lifetime of suffering.
Second, my worldview doesn’t contain that mentality. Abortion isn’t about killing babies. To me it’s more about stopping potential life. Whether or not that life has potential that is actually good is the part that I disagree with most of society.
I took it as simply 1 full life expectancy vs 5x remaining life expectancy of a senior citizen. Put a few more older people on the tracks and I'd pick the other switch.
Plus the child still has the potential to bring more life into this world, whereas the old people are now infertile (my definition of old being they are too old to procreate).
Those additional lives are hypothetical and in the future, and maximizing the number of humans alive might not be the right thing to do. IMO this is one of the really juicy questions.
Fair point. Given the decreasing birth rate in the western world, I figured we need a few more humans to pay for prior generations retirement. Only half joking…
Part of the Trolley Problem is that the choice is between an action or inaction. But these problems, or the first one at least, have two buttons.
So you're making a choice. Yeah, some might argue it's a technicality. But what if you put the Do Nothing choice on a timer?
What if you made it an asynchronous "Problem of the Day". No action by the end of the day triggers Do Nothing, etc.
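A minimal sketch of that deadline rule, assuming a hypothetical resolver (the function and choice names are made up; nothing here is from the actual site):

```python
def resolve_dilemma(choice, elapsed_s, deadline_s=86400, default="do_nothing"):
    """Resolve a 'Problem of the Day' style dilemma where silence past the
    deadline counts as the default 'Do Nothing' outcome.
    choice: the player's explicit pick, or None if they never answered.
    elapsed_s: seconds since the dilemma was posed."""
    if choice is not None and elapsed_s <= deadline_s:
        return choice
    return default

print(resolve_dilemma("pull_lever", elapsed_s=3600))   # pull_lever
print(resolve_dilemma(None, elapsed_s=90000))          # do_nothing
```

The interesting design consequence: once a timer exists, even the player who walks away has made a recorded choice, which is exactly the ultimatum the original Trolley Problem relies on.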
Lots of cool, interesting design choices. There's an unstated, subtle ultimatum hidden in these problems, but still cool.
Sorry, geeking out.