I think you might be missing a big part of what this sort of philosophy is really about.

> Utilitarianism ought to be about maximizing the happiness (total and distribution) of an existing population

For those who accept your claim above, a lot follows. But the claim is a bold assertion that isn't accepted by everyone involved -- or even by many of the people involved.

The repugnant conclusion is a thought experiment where one starts from certain stripped-down premises -- not including yours here -- and follows them to their logical conclusion. This is worth doing because many people find it plausible that those axioms define a good ethical system, yet the fact that they entail the repugnant conclusion makes people say, "Something here seems to be wrong or incomplete." People have proposed many alternative axioms; yours is just one of them, and not a popular one.
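To make the arithmetic that drives the conclusion concrete, here is a minimal sketch under total utilitarianism (the population sizes and utility values are made up for illustration; they aren't Parfit's numbers):

    # Total utilitarianism ranks worlds by the sum of individual utilities.
    def total_utility(population_size, utility_per_person):
        return population_size * utility_per_person

    world_a = total_utility(10_000_000, 90.0)      # small world, very happy lives
    world_z = total_utility(100_000_000_000, 0.1)  # vast world, lives barely worth living

    print(world_a)            # 900000000.0
    print(world_z)            # 10000000000.0
    print(world_z > world_a)  # True -- the total view prefers Z; that's the repugnant conclusion

For any world A, you can always pick a population large enough that Z's total comes out ahead, no matter how thin each life in Z is spread.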

I suspect part of the reason yours isn't popular is:

- People seek axiological answers from their ethical systems: they want to be able to answer "Which of these two unlike worlds is better?" even when they aren't asking "What action should I take?" Many people want every question of the form "What is better, period?" to be answerable. Some folks have explored a move along the lines of yours, where sometimes no comparison is available, but giving up on being able to compare every pair of worlds isn't popular.

- We actually make decisions, or expect to face real future decisions, that result in there being more or fewer persons. Is it right to have kids? Is it right to subsidize childbearing? Is it right to try to create a ton of virtual persons?

> The happiness of an unconceived possible human is null (not the same as zero!)

Okay, if you say "total utilitarianism (and everything like it) is wrong," then of course you don't reach the repugnant conclusion via Parfit's argument. But answering "A, B, and C imply D" with "well, not B" is not a very interesting move here.

Your null posit also doesn't really answer how we _should_ handle decisions that result in persons being created or destroyed.
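For what it's worth, the null-versus-zero distinction does have a crisp formal reading -- roughly, that worlds with different populations are simply incomparable by summation. Here is a sketch of that reading (the encoding, including the refusal to compare, is my own illustration, not the parent's):

    # Zero convention: a never-conceived person contributes a 0 term, so
    # creating anyone with utility > 0 strictly improves the total.
    # Null convention (sketched here): no term exists at all, so worlds
    # with different populations get no verdict from summation.
    def compare_worlds(world_x, world_y):
        if len(world_x) != len(world_y):
            return None  # incomparable: no fact of the matter, on this reading
        return sum(world_x) - sum(world_y)

    small_happy = [90.0] * 10        # ten very happy people
    huge_content = [0.1] * 100_000   # many barely-content people
    print(compare_worlds(small_happy, huge_content))  # None -- no verdict either way

Which is exactly the gap: a system that returns "no verdict" still has to say something when the decision in front of you is whether to create the person.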

> We really shouldn’t get ourselves tied into knots about the possibility of pumping out trillions of humans because an algorithm says they’ll be marginally net content. That’s not the end goal of any reasonable, practical, or sane system of ethics.

Okay, what is the end goal? If you'll enlighten us, then we can all know.

Until then, folks are going to keep trying to figure it out. Parfit explored a system whose premises many people might have thought sounded good, and showed that it leads to the repugnant conclusion. The normal reaction is: "Okay, that wasn't the right recipe. Let's keep looking -- I want a better recipe so I know what to do in hard, real cases." And since such folks rejected the system because it led to the repugnant conclusion, they can also be less confident in its prescriptions in more practical situations: they now know its premises don't reflect the ethical system they want to adopt.


