
Back in my day effective altruism was mainly about finding charities that aren't essentially scams (way harder than it looks). Scene has apparently moved on to other things since I followed it a decade or so ago.


This is a super-simplified summary, but I think it's generally accurate.

The basic thesis of EA is "it is your duty to improve/save as many human lives as you can."

At some point, a lot of EAs realized that there were many more future humans than present humans.

Once you widen your scope in that way, you start realizing that long-term, catastrophic risks (climate change, nuclear disaster, misaligned AI) would affect a lot more human lives -- billions or trillions more -- than basically anything we could do today.

So the logic becomes -- why would I spend time/money on mosquito nets when we need to be securing the literal future of the human race?


The expansion of EA from eliminating malaria to interplanetary exploration was a pure grift. Once EA organizations started to pull in serious money and started to purchase palatial estates in Oxford[1] you had to know the jig was up.

[1] https://en.wikipedia.org/wiki/Wytham_Abbey#/media/File:Wytha...


Because we do that with mosquito nets.

EA seems like a way to achieve nothing while looking like you're doing everything. No one expects you to fly to Mars tomorrow. And that's true every single day. It's true today. It'll be true tomorrow. It was true yesterday. It was true 10 years ago. It will be true 10 years from now.

So if no one really expects you to fully achieve your goal, all you have to do is kinda look like you're trying and that will be good enough for most people.

EA takes a good, hard look at all these good intentions and says, "Fuck, this would make a baller ass road".

However, if we solve malaria, that's another thing not killing us. Another problem checked off. Like polio. Or smallpox. Colonize Mars? Fucking how? We can't even get the environment on Earth under control. How the living fuck are we going to create an environment on another fucking planet, much less even get there?

So how about we figure out a way to get the garbage out of the ocean. Or how to scrub the air of CO2. How to manufacture and produce without so many polluting side effects. We keep doing all these smaller things. Put in the work, and one day we will save all those trillions of potential lives. But it requires putting in the work.

Edit: Not saying you believe it. But presenting the counter-argument to EA.


Where does this view of EA come from? Hatchet jobs written by TIMES and its ilk? Twitter personalities like Elon Musk who have so much social gravity that people perceive them as the spokesmen of anything they mention that you hadn't heard of before?

Google "effective altruism" and the first two results are EA/Giving What We Can and GiveWell. Both of these organizations are meta-charities that help forward money or encourage the forwarding of money to other charities, but most of all... Mosquito nets! The first charitable fund mentioned by EA/Giving What We Can is GiveWell's, and the top recipient of that fund is the Malaria Consortium.

I heartily encourage you to read about GiveWell. It's still the heart of EA from the perspective of the less-vocal majority of self-described EAs.


I think the “where does this view come from?!” outrage comes off as disingenuous. I think we both know that over the past couple of years the most prominent public “face” of the EA community has been William MacAskill, who went on a major donor-funded press tour to promote his ideas on longtermism through his book “What We Owe the Future.” For most of the general public, this was probably their first encounter with the entire concept of EA.

It is perfectly fine if you don’t support MacAskill’s vision for EA’s future. I would love to hear a critique of this schism from someone within the EA community! But when you imply that critics are getting their (accurate) impression of EA from “newspaper hatchet jobs”, it feels like you’re either unaware of the way some prominent EAs are presenting the movement, or else you’re not arguing in good faith.


So you feel as though William MacAskill has been the "public face" of EA for the past couple years. That's possible, though it would make a little more sense if you had said one quarter of that time period, since his book was released in August.

I'd normally not want to get into personal accusations, but since you've already started your reply with one sentence ending in "disingenuous" and the next starting with "I think we both know" (which is infuriating), and to round it all out ended your comment with "...or else you're not arguing in good faith", I'll say it: I think you're projecting your personal Internet experience on others, and I think your personal Internet experience does not reflect that of the median person. MacAskill is not the face of EA. I think if you look at search data, you'll find that Peter Singer's popularity merely went from being ~100x MacAskill's to more like 10x during the book tour.

EA predates the notions in What We Owe the Future by many years. Present-focused charities like GiveWell were perhaps overshadowed in popularity by that book for a news cycle or two in late 2022. It happens. But the notion that that book or its author has been in any way the "most prominent" aspect of EA for the last couple of years is completely false. It's projection. In your mind, that's all EA is lately, so it must be all it has ever been (hence the exaggerated timeline), and everybody else is just like you.


Here is what you said to the other poster: “Where does this view of EA come from? Hatchet jobs written by TIMES and its ilk? Twitter personalities like Elon Musk who have so much social gravity that people perceive them as the spokesmen of anything they mention that you hadn't heard of before?”

Having re-read this, it just strikes me as extremely disingenuous and uncharitable (not to mention aggressive) particularly since you seem to know that there has been a huge amount of press around EA recently due to the MacAskill longtermism book, not to mention all the press around SBF and his longtermist fund.


No! Gah! You're doing it again! I'm not being aggressive, I'm being harassed by somebody who keeps telling me what I "know"! Why do you insist on talking to me like this instead of just taking me at my word?

> ...you seem to know that there has been a huge amount of press around EA recently due to the MacAskill longtermism book...

I do not know this! I think I saw one article about it posted here on HN. I also might have read a post on somebody's substack about it a few months ago. I am not aware of any "huge amount of press", certainly not "recently". I looked into the search stats on it because of your comment. I didn't even remember the name MacAskill or much about his book before you brought it up. EA to me is still basically just malaria nets and other present-focused causes, and I can only assume that's what it is to most of the many, many people who have read Singer and not MacAskill and who donate to GiveWell year after year.


> Where does this view of EA come from?

I'll answer. I think the view comes from a different, but related, group known as "The rationalists".

"The rationalists", or LessWrongers, fit to a tee all the common criticisms of EA that people are bringing up.

And the reason this criticism may be misattributed to EA specifically is that there is a large overlap between the rationalists and EA.

The rationalists are the ones talking about AI existential risk, and colonizing Mars, and all that nonsense.


Although the thought leaders are focused on longtermism, most EA money still flows through GiveWell, mostly to global health initiatives.


I find it odd that longtermists don't see that the obvious solution to long-term issues is throwing more brains at the problems, which implies doing good now.

Maybe I am missing part of the argument?


I'm assuming you mean that we should be trying to bring as many people as possible out of poverty and get them good educations (in which case, I wholeheartedly agree).

When you've built a lot of your identity on the idea that you are one of the smartest people in the room, it can be very hard to accept proposals that would challenge that. This would do so in 2 ways:

1) "Creating" more smart people means more competition for them—possibly even people who would end up being smarter than they are.

2) For a lot of them (at least from what I've come to understand), part of the "proof" that they are very intelligent is that they are very wealthy. If you start pushing the idea that intelligence doesn't automatically show itself and lead to wealth too hard, it's not very many steps from there to disproving the idea that wealth implies intelligence...and then how can they be so sure they're that intelligent after all?


> in which case, I wholeheartedly agree

That is exactly what I had in mind as I was writing it.

As for the rest, I haven't met such people, as my circle is mostly academic and Scandinavian; however, the evidence supports it. The evidence being how certain very prominent figures seem to enjoy the company of sycophants and yes-people.

It's a shame really.


The existential risk stuff was baked into the Effective Altruism movement from the beginning: founder William MacAskill was a student of co-founder Toby Ord, who in turn was a student of Nick Bostrom, who established x-risk research as a field in academia. Ord and Bostrom both now work at the Future of Humanity Institute, an institution dedicated to the species' long-term future, and it's mostly concerned with x-risk research. Both Ord and Bostrom are frequently cited in EA writings, with book clubs being organized around Ord's popular doomsday warning, The Precipice.

Bostrom and Eliezer Yudkowsky knew each other from their early transhumanist roots, and some of the early EA community organizing was done through Yudkowsky's LessWrong forum. That, and the establishment and funding of both EA and Rationalist orgs in Berkeley by philanthropists associated with both causes, primarily Skype co-founder Jaan Tallinn, seems to explain the overlap between those communities (fun fact: the Effective Altruism online forum runs the same custom software as LessWrong). And of course Eliezer has dedicated his career to founding AI x-risk institutions.


> basic thesis of EA is "it is your duty to improve/save as many human lives as you can."

No, it's not about having any duty. (Where did you get that from? Friendly question)

Instead: if you want to help others, then you can stop and think, and do some research (or read others') before deciding where to spend your time and money.

Something like that.

Still, a movement formed around a concept that involves money (donations, lots of them) and status (getting appreciation for helping others) is going to attract some of the wrong people. So maybe it's unsurprising that from time to time we're reading negative things about movements formed around EA-the-concept. Although the concept itself is neither good nor bad (well, except in some people's opinions).


I forget where I heard it, but one of the issues brought up was that EA never really formalized the 'value' of a life relative to the 'value' of other stuff.

Like, there exists some value to the entirety of the Amazon that is higher than that of a human life. Otherwise the terrible logic says that you should devastate the Amazon just to build slum housing and take away birth control. I'm not arguing for any of this, just stating the premises.

I think we can all agree that the 'value' of the whole Amazon isn't worth bulldozing for slum housing.

So the problem is where you put these fuzzy lines. You've got some extremes that 99.9% of people agree on; where is the middle? Where do you put down a line?

From the VERY little I've read of the EA debates, there seems to be no real work on this? If someone else could synthesize this as a reply, I'd be quite grateful.


At some point you've just got to decide for yourself. There's a lot of focus on human lives, and especially QALYs, because they can be thought of as interchangeable, but that hasn't stopped some EAs from focusing on animal welfare and other non-humanist causes. There's no objective way to value rainforest or animal suffering in terms of QALYs; all you can do is run thought experiments and read studies on the effectiveness of charities focusing on each, so that you can decide which is more effective given your ethical framework.
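To make the QALY comparison concrete, here's a rough sketch of the arithmetic that cost-effectiveness estimates in the GiveWell style boil down to. The charity names and figures below are invented purely for illustration, not real estimates:

    # Hypothetical figures, purely for illustration.
    charities = {
        "Bednet Charity":    {"cost_per_intervention": 5.0, "qalys_per_intervention": 0.010},
        "Deworming Charity": {"cost_per_intervention": 1.0, "qalys_per_intervention": 0.001},
    }

    # Rank by cost per quality-adjusted life year (lower is better).
    for name, c in charities.items():
        cost_per_qaly = c["cost_per_intervention"] / c["qalys_per_intervention"]
        print(f"{name}: ${cost_per_qaly:,.0f} per QALY")

    # Bednet Charity: $500 per QALY
    # Deworming Charity: $1,000 per QALY

The hard part, as noted above, is that there's no agreed-upon QALY equivalent for a hectare of rainforest or a unit of animal suffering, so the comparison only works after you've already decided how much those things are worth.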


I guess GiveWell still does that, but I'm not really sure what everyone else in the movement does.


Dustin Moskovitz's Good Ventures also seems to focus on more "prosaic" issues like medical research/direct cash transfers/animal welfare.


GiveWell is certainly my go-to.


Looks good, I'll check it out.



