
> Effective Altruism is just a modern iteration of a thing that's been around for a very long time.

Which you think is what, exactly? I'm under the impression that thing is warmed-over utilitarianism.

> The fundamental idea is sound.

I do not believe utilitarianism is sound, because its logic can be easily used to justify some obviously horrible things. However, the framework appeals very strongly to "rationalist"-type people.



It's not utilitarianism, it's a scam.

It's a group of people persuading themselves they're special and entitled, because of course they are, and then trying to sell that line - financially, psychologically, sometimes politically - to themselves and others.

Which is not a new thing in any way at all. The wrapping changes, the psychological games don't.

I have a rule of thumb which is that if you want to understand a movement, organisation, team, social or personal relationship, or any other grouping, the messaging and the stated purpose are largely irrelevant. The real tell is the quality of the relationships - internally, and with outsiders.

If there's a lot of entitlement, rhetorical myth-making, and grandiosity happening, expect some serious dysfunction, and very likely incongruence and hypocrisy.


> I do not believe utilitarianism is sound, because its logic can be easily used to justify some obviously horrible things.

Right, but that doesn't mean we shouldn't care about consequences at all. There's a pretty big gap between "given that we have scarce resources, we should maximize the impact of our use of them" and "committing this atrocity is fine because the utility calculations work out".


The reason EA seems like it's a form of utilitarianism is of course the association with the so-called rationalist community. As you note, it's very appealing to that type of person. This is partly because the math used to rigorously compare consequences seems easy, and partly because utilitarianism has a lot of good places to hide your subjective value judgements.

You can apply EA-like concepts with any sort of consequentialist ethics. E.g. the Rawlsian veil of ignorance can work -- would hypothetical-me rather reduce his chance of dying of malaria by X%, or reduce his chance of malnutrition by Y%? It's just harder to explain why you rank one course of action over another, and therefore you're probably not going to be able to centralize the decision-making.

This isn't because it's somehow unsound[0]. It's because it's harder (though not impossible) to explain with math, and the subjective value judgements are right in your face rather than hidden in concepts like utility functions.
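
To make that concrete, here's a minimal sketch in Python. Every number in it is invented for illustration -- the baseline risks, the risk reductions, and the "badness" weights are placeholders, not real epidemiology:

    # Hypothetical baseline risks for one person behind the veil.
    baseline = {"malaria_death": 0.02, "malnutrition": 0.15}

    # Two interventions, each cutting one risk by some fraction
    # (the X% and Y% from above).
    interventions = {
        "bed_nets": {"malaria_death": 0.50},  # cuts malaria deaths by 50%
        "food_aid": {"malnutrition": 0.40},   # cuts malnutrition by 40%
    }

    # The subjective value judgement, stated out in the open: how bad
    # is each outcome relative to the other? Utilitarian notation
    # tends to hide this step inside a "utility function".
    badness = {"malaria_death": 100.0, "malnutrition": 10.0}

    def expected_harm(cuts):
        """Expected badness for hypothetical-me after an intervention."""
        return sum(badness[k] * baseline[k] * (1 - cuts.get(k, 0.0))
                   for k in baseline)

    for name, cuts in interventions.items():
        print(f"{name}: expected harm = {expected_harm(cuts):.2f}")

With these weights, bed_nets wins (2.50 vs 2.90); raise the malnutrition weight to 40.0 and food_aid wins instead (5.60 vs 7.00). The arithmetic is trivial; the ranking lives entirely in the weights, which are exactly the judgement you'd have to defend.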

[0]- It might be; you might not accept the premise of the veil of ignorance. That's not the reason it seems trickier than the utilitarian version, which has the same problem.


The "10% of lifetime income to charity" pledge is pretty close to Christian tithing, Islamic zakat, and suchlike. Who also claim to be spending donations to help the poorest people in society, and with low waste.

Of course, EA has a bunch of other weird stuff like AI safety, which isn't an idea that's been around for millennia.


Well, actually, on AI safety: https://en.m.wikipedia.org/wiki/Golem


When you make that juxtaposition, the idea that you must obey ridiculous rules in order to placate an invisible omnipotent being does seem to have religious analogs.


> I do not believe utilitarianism is sound, because its logic can be easily used to justify some obviously horrible things.

I think most moral philosophers agree with you on that.

IMO, the way people talk about Utilitarianism feels exactly like how I felt at school when, having only been taught 2D vectors and basic trig functions, I spent six months trying to figure out what it even meant to render in 3D — in particular the sense of pride when I arrived at two useful frameworks (which I later learned were unshaded 3D projection and ray-marching).

By analogy: while that was a good start, there's a long way from it to even Doom/Marathon, let alone modern rendering. Similarly, while Utilitarianism is a good start, all it says is "good and bad can be quantified and added", and it falls over very quickly once you involve even moderately sized groups of spherical-cow-in-a-vacuum people with finite capacity for experiencing utils. It also very definitely can't tell you what you should value, because of the is/ought divide.

Once your model for people is complex enough to admit that some of them are into BDSM, I don't think the original model of simple pluses and minuses can even ascribe a particular util level any more.
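
To put toy numbers on both failure modes (the tanh cap and all the figures here are arbitrary choices, just to show the shape of the problem):

    import math

    # Naive model: utils are scalar and add linearly across people.
    def naive_total(utils):
        return sum(utils)

    # Finite capacity: a saturating response (tanh is an arbitrary
    # choice) caps what any one person can actually experience.
    def saturating_total(utils, capacity=10.0):
        return sum(capacity * math.tanh(u / capacity) for u in utils)

    # Linearly, 100 utils for one person equals 10 utils for each of
    # ten people; with finite capacity, concentration is worth far less.
    print(naive_total([100.0]), naive_total([10.0] * 10))            # 100.0 100.0
    print(saturating_total([100.0]), saturating_total([10.0] * 10))  # ~10.0 vs ~76.2

    # And the sign problem: the same event can be a plus for one
    # person and a minus for another, so a util level isn't a
    # property of the event at all.
    event_intensity = 5.0
    preference_sign = {"alice": +1, "bob": -1}  # per-person, not derivable from the event
    print(naive_total(s * event_intensity for s in preference_sign.values()))  # 0.0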


Utility can't be one-dimensional, and it probably isn't linear.

In other words, we would have to treat most situations as unique, then try to find patterns, and then the whole rationalism thing goes out of the window.
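
A sketch of what that looks like, with invented axes and numbers: once utility is a vector over incommensurable dimensions, outcomes only form a partial order, and most pairs aren't comparable without a further judgement call.

    from typing import Tuple

    # (health, autonomy, community) -- the axes are made up.
    Outcome = Tuple[float, float, float]

    def pareto_dominates(a: Outcome, b: Outcome) -> bool:
        """A beats B only if it's at least as good everywhere and better somewhere."""
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    fund_clinic: Outcome = (9.0, 4.0, 3.0)
    fund_theater: Outcome = (2.0, 6.0, 8.0)

    # Neither dominates the other, so there's no ranking until you
    # impose a subjective trade-off between the axes -- case by case.
    print(pareto_dominates(fund_clinic, fund_theater))  # False
    print(pareto_dominates(fund_theater, fund_clinic))  # False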


Only spherical-rationalism-in-a-vacuum goes out the window.

Unfortunately, and this goes back to my previous point, lots of people (I'm not immune!) mistake taking the first baby step for climbing the whole mountain.

I'm reminded of the joke about an engineer, a physicist, and a theoretical mathematician who each wake up to a fire in their bedrooms: https://old.reddit.com/r/MathJokes/comments/j8bax6/an_engine...


> I do not believe utilitarianism is sound, because its logic can be easily used to justify some obviously horrible things. However, the framework appeals very strongly to "rationalist"-type people.

If it sounds horrible, then it probably is?

The logical chain of hurting one human in order to help two humans doesn't sound like something that is moral or dependable.

Giving to charities that focus on the most severe and urgent problems of humanity is a very straightforward idea.

However, not all charities are focused on the most urgent problems. For example, a local charity I frequent does improv and comedy theater, hardly 'urgent' needs. What people don't like to hear is that they could donate that money to an NGO providing vaccines or fighting corruption in third-world countries, instead of to their local church or community theater.

Don't get me wrong, community theaters/churches/etc are good things. They just aren't saving lives.


Every ethical theory is a mess of contradictions, but throwing ethics out entirely isn't the right answer. As for hurting someone to help others: sometimes someone needs to kill someone like Hitler for the greater good.


> I do not believe utilitarianism is sound, because its logic can be easily used to justify some obviously horrible things.

This was the point I was making, yes. The idea itself isn't logically faulty, but it is easy to subvert.


There was a study a while ago that said people who accept utilitarianism are more likely to have psychopathic traits.


Based. Utilitarianism is cringe, these guys need to read Kant or something.



