
The Repugnant Conclusion is one of those silly problems in philosophy that don’t make much sense outside of academia.

Utilitarianism ought to be about maximizing the happiness (total and distribution) of an existing population. Merging it with natalism isn’t realistic or meaningful, so we end up with these population morality debates. The happiness of an unconceived possible human is null (not the same as zero!)

Compare to Rawls’s Original Position, which uses an unborn person to make the hypothetical work but is ultimately about optimizing for happiness in an existing population.

We really shouldn’t get ourselves tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content. That’s not the end goal of any reasonable, practical, or sane system of ethics.




Rawls's original position, and the veil of ignorance he uses to support it, share a major weakness: it's a time-slice theory. Your whole argument rests on it. You're talking about the "existing population" at some particular moment in time.

Here I am replying to you 3 hours later. In the meantime, close to 20,000 people have died around the world [1]. Thousands more have been born. So if we're to move outside the realm of academics, as you put it, we force ourselves to contend with the fact that there is no "existing population" to maximize happiness for. The population is perhaps better thought of as a river of people, always flowing out to sea.

The Repugnant Conclusion is relevant, perhaps now more than at any time in the past, because we've begun to grasp -- scientifically, if not politically -- the finitude of earth's resources. By continuing the way we are, toward ever-increasing consumption of resources and ever-growing inequality, we are racing towards humanitarian disasters the likes of which have never been seen before.

[1] https://www.medindia.net/patients/calculators/world-death-cl...


> By continuing the way we are, toward ever-increasing consumption of resources and ever-growing inequality, we are racing towards humanitarian disasters the likes of which have never been seen before.

We aren't doing that. Increasing human populations don't increase resource consumption, because (1) resources aren't always consumed per capita, and (2) we have the spare human capital to invent new, cleaner technology.

It's actually backwards: decreasing populations make for a deflating economy, which encourages consumption rather than investment in productivity. That's how so many countries managed to deforest themselves when wood fires were still state of the art.

Also, "resources are finite" isn't an argument against growth because if you don't grow /the resources are still finite/. So all you're saying is we're going to die someday. We know that.


I mostly agree. However:

> That's how so many countries managed to deforest themselves when wood fires were still state of the art.

It was mostly ship building that deforested eg the countries around the Mediterranean and Britain. Firewood was mostly harvested reasonably sustainably from managed areas like coppices in many places. See https://en.wikipedia.org/wiki/Coppicing


> By continuing the way we are, toward ever-increasing consumption of resources and ever-growing inequality, we are racing towards humanitarian disasters the likes of which have never been seen before.

What do you mean by ever growing inequality? Global inequality has decreased in recent decades. (Thanks largely to China and to a lesser extent India moving from abject poverty to middle income status.)

By some measures we are also using fewer resources than we used to. Eg resource usage in the US, as measured in the total _mass_ of stuff flowing through the economy, peaked sometime in the 1930s.

Have a look at the amount of energy used per dollar of GDP produced, too. Eg at https://yearbook.enerdata.net/total-energy/world-energy-inte...


> Utilitarianism ought to be about maximizing the happiness (total and distribution) of an existing population.

That's a somewhat-similar alternative to utilitarianism, one which has its own kind of repugnant conclusions, in part as a result of the same flawed premises: that utility experienced by different people is a quantity with common objective units that can be meaningfully summed, and that, given that, morality is defined by maximizing that sum across some universe of analysis. It differs from by-the-book utilitarianism in changing the universe of analysis, which changes the precise problems the flawed premises produce, but it doesn't really solve anything fundamental.
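The summed-utility premise can be made concrete with a toy calculation (all numbers invented purely for illustration, not taken from Parfit or anyone else):

```python
# Toy illustration of why the total view yields the Repugnant Conclusion:
# a vast population of barely-content people can out-sum a smaller,
# much happier one. Welfare units here are entirely made up.

def total_utility(population: int, welfare_per_person: float) -> float:
    """Total utilitarianism: sum welfare across every person."""
    return population * welfare_per_person

# World A: 10 billion people with very high welfare.
world_a = total_utility(10_000_000_000, 90.0)

# World Z: 10 trillion people whose lives are barely worth living.
world_z = total_utility(10_000_000_000_000, 0.1)

# The total view ranks Z above A, which many people find repugnant.
assert world_z > world_a
```

The point of the sketch is only that once you grant a common, summable welfare unit, sheer headcount can swamp quality of life; changing the universe of analysis doesn't remove that lever.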

> Compare to Rawls’s Original Position, which uses an unborn person to make the hypothetical work but is ultimately about optimizing for happiness in an existing population.

No, it's not; the Original Position neither deals with a fixed existing population nor is about optimizing for happiness in the summed-utility sense. It's more about optimizing the risk-adjusted distribution of the opportunity for happiness.


>We really shouldn’t get ourselves tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content. That’s not the end goal of any reasonable, practical, or sane system of ethics.

Are you sure you aren't sharing the world with people who do not adhere to any reasonable, practical, or sane system of ethics?

Because, ngl, lately I'm not so sure I can offer an affirmative on that one, which makes "getting tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content" a reasonable thing to be trying to cut, a la the Gordian knot.

After all, that very thing, "pumping out trillions of humans because some algorithm (genetics, instincts, and culture taken collectively) says they'll be marginally more content," is the modus operandi of humanity, with shockingly little appreciation for the externalities involved.


I think you might be missing a big part of what this sort of philosophy is really about.

> Utilitarianism ought to be about maximizing the happiness (total and distribution) of an existing population

For those who accept your claim above, lots of stuff follows, but your claim is a bold assertion that isn't accepted by everyone involved, or even by many of those involved.

The repugnant conclusion is a thought experiment where one starts with certain stripped-down claims (not including yours here) and follows them to their logical conclusion. This is worth doing because many people find it plausible that those axioms define a good ethical system, but the fact that they entail the repugnant conclusion causes people to say, "Something in here seems to be wrong or incomplete." People have proposed many alternate axioms, and your take is just one of them, and not a popular one.

I suspect part of the reason yours isn't popular is

- People seek axiological answers from their ethical systems: they wish to be able to answer "Which of these two unlike worlds is better?" even when they aren't asking "What action should I take?" Many people want to know "What is better?", period, and they want such questions to always be answerable. Some folks have explored a concept along the lines of yours, where sometimes there just isn't a comparison available, but giving up on being able to compare every pair isn't popular.

- We actually make decisions or imagine the ability to make future real decisions that result in there being more or fewer persons. Is it right to have kids? Is it right to subsidize childbearing? Is it right to attempt to make a ton of virtual persons?

> The happiness of an unconceived possible human is null (not the same as zero!)

Okay, if you say "total utilitarianism (and all similar things) are wrong," then of course you don't reach the repugnant conclusion via Parfit's argument. But "A, B, and C imply D" countered with "Well, not B" is not a very interesting argument here.

Positing null also doesn't really answer how we _should_ handle questions about actions that result in persons being created or destroyed.

> We really shouldn’t get ourselves tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content. That’s not the end goal of any reasonable, practical, or sane system of ethics.

Okay, what is the end goal? If you'll enlighten us, then we can all know.

Until then, folks are going to keep trying to figure it out. Parfit explored a system that many people might have thought sounded good on its premises, but proved it led to the repugnant conclusion. The normal reaction is, "Okay, that wasn't the right recipe. Let's keep looking. I want to find a better recipe so I know what to do in hard, real cases." Since such folks rejected the ethical system because it led to the repugnant conclusion, they could be less confident in its prescriptions in more practical situations -- they know that the premises of the system don't reflect what they want to adopt as their ethical system.


>The Repugnant Conclusion is one of those silly problems in philosophy that don’t make much sense outside of academics.

Not even for academics. It's something for "rational"-bros.


(Real, academic philosophers actually care about the case, too.)


Only because the practice (in the US mostly) has been watered down a lot to include all kinds of rational-bros in the tradition of "analytical philosophy", usually also involved in the same circles and arguments with the wide rational-bro community.

Then again, the opposite side has also devolved into a parody of 20th-century continental philosophical concerns, with no saving grace.



