
> Utilitarianism ought to be about maximizing the happiness (total and distribution) of an existing population.

That's a somewhat-similar alternative to utilitarianism, which has its own kind of repugnant conclusions, in part as a result of the same flawed premises: that utility experienced by different people is a quantity with common objective units that can meaningfully be summed, and that, given this, morality is defined by maximizing that sum across some universe of analysis. It differs from by-the-book utilitarianism in changing the universe of analysis, which changes the precise problems the flawed premises produce, but doesn't really solve anything fundamentally.

> Compare to Rawls’s Original Position, which uses an unborn person to make the hypothetical work but is ultimately about optimizing for happiness in an existing population.

No, it's not; the Original Position neither deals with a fixed existing population nor is about optimizing for happiness in the summed-utility sense. It's more about optimizing the risk-adjusted distribution of the opportunity for happiness.


