
> They remind me of the "Effective Altruism" crowd who get completely wound up in these hypothetical logical thought exercises and end up coming to insane conclusions that they feel trapped in because they got there using pure logic. Not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value.

Do you have examples of that? I have a different perception, most of the EAs I've met are very grounded and sharp.

For example the most recent issue of their newsletter: https://us8.campaign-archive.com/?e=7023019c13&u=52b028e7f79...

I don't see any “hypothetical logical thought exercises” that “end up coming to insane conclusions” in there.

As for the part where you say “not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value”, this is quite the opposite of my experience with them. They are very receptive to criticism and reconsider their point of view in response to it.

They are generally well aware of the limits of data-driven initiatives and the dangers of indulging in purely abstract thinking that can lead to conclusions that indeed don't make sense.



The confluence of Bay Area rationalism and academic philosophy means much of the rest of the EA space is given over to discussing hypotheticals in long-winded forum posts, blogs and papers. Some of those are well-trod utilitarian debates; others veer into uniquely EA arguments, like asserting that since there could be as many as 10^31 future humans, essentially anything which claims to reduce existential risk - no matter how implausible the mechanism - has higher expected value than doing things that would certainly save human lives.

An apparently completely unironic forum argument asked fellow EAs to consider the possibility that, given various heroic assumptions, the sum total of the suffering anti-malaria nets cause to mosquitoes might in fact be larger than the suffering from the malaria they prevent. Obviously not a view shared by EAs who donate to anti-malaria charities, but absolutely characteristic of the sort of knots EAs like to tie themselves in - it even has its own jokey jargon ('the rebugnant conclusion' and 'taking the train to crazy town') for adjacent arguments and the impulse to pursue them.
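To make the expected-value arithmetic behind that first argument concrete, here is a toy sketch (every number below is invented purely for illustration; none of them comes from an EA source):

    # Toy expected-value comparison with invented numbers.
    future_humans = 1e31        # the claimed upper bound on future population
    risk_reduction = 1e-15      # an implausibly tiny assumed cut in extinction risk
    direct_lives_saved = 1e6    # lives a large, concrete aid programme might save

    ev_xrisk = future_humans * risk_reduction   # 1e16 expected future lives
    print(ev_xrisk > direct_lives_saved)        # True: the hypothetical dwarfs the concrete

On assumptions like those, virtually any x-risk project "wins", which is exactly the sort of knot I mean.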

The newsletter is of course far more to the point than that, but even then you'll notice half of it is devoted to understanding the emotional state and intentions of LLMs...

It is of course entirely possible to identify as an "Effective Altruist" whilst making above-average donations to charities with rigorous efficacy metrics and otherwise being completely normal, but that's not the centre of EA debate or culture....


> that's not the centre of EA debate or culture....

EAs gave $1,886,513,058 through GiveWell[1], and there is 0 AI stuff in there (you can search in the linked Airtable spreadsheet).

There is also a whole movement around making a lifetime commitment to give 10% of your earnings to charity; 9,880 people have taken the pledge so far[2].

[1] https://airtable.com/appGuFtOIb1eodoBu/shr1EzngorAlEzziP/tbl...

[2] https://www.givingwhatwecan.org/pledge


Sure, but I'd say that the philosophical musings of EA's leadership, events, university outreach centres, and official forum were more representative of "the centre of EA debate or culture" than pledge signatories, even though I've got wayyy more time for stuff like the Giving Pledge.

GiveWell continues to plow its own rigorous, international-development-focused furrow, but its cofounder, once noted for calling out everyone else for the lack of rigour in their evidence base, has moved on to fluffy essays about how this is probably "the most important century" because it's either AI armageddon or maybe his wife's $61B startup will save us all...


As Adam Becker shows in his book, EAs started out with the reasonable "give to charity as much as you can, and research which charities do the most good", but have gotten into absurdities like "it is more important to fund rockets than to help starving people or prevent malaria, because maybe an asteroid will hit the Earth, killing everyone, starving or not".


It's also not a very big leap from "My purpose is to do whatever is the greatest good" to "It doesn't matter if I hurt people as long as the overall net result is good (by some arbitrary standard)"


99% of effective altruists and rationalists agree that you shouldn't hurt people as part of some complicated scheme to do good. For example, here is Eliezer Yudkowsky in 2008 saying exactly that, and explaining why it's true: https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t...


I believe they believe that, on its face.

I also believe that idealistic people will go to great lengths to convince themselves that their desired outcome is, in fact, the moral one. It starts by saying things like, "Well, what is harm, actually..." and then constructing a definition that supports the conclusions they've already arrived at.

I'm quite sure Sam Bankman-Fried did not believe he was harming anybody when he lost/stole/defrauded his investors' and depositors' money.


Like a very old dude once said: "No one is willfully evil."


This isn’t a hypothetical leap either. This thinking directly led to the murders committed by the Zizians.


I think this is the key comment so far.


It seems very odd to criticize the group that most reliably and effectively funds global health and malaria prevention for this.

What is your alternative? What's your framework that makes you contribute to malaria prevention more or more effectively than EAs do? Or is the claim instead that people should shut down conversation within EA that strays from the EA mode?


The simple answer is you don't need a "framework" -- plain empathy for the less fortunate is good enough. But if the EAs actually want to do something about malaria (although the Gates Foundation does much, much more in that regard than the Centre for Effective Altruism), more power to them. Still, as Becker notes from his visits to the Centre, things like malaria and malnutrition are not its primary focus.


EA people gave a total of $817,276,989 to malaria initiatives through GiveWell[1][2].

How much more do they need to give before you will change your mind about whether the “EAs actually want to do something about malaria”?

[1] https://www.givewell.org/all-grants-fund

[2] https://airtable.com/appGuFtOIb1eodoBu/shr1EzngorAlEzziP/tbl...


I've used GiveWell for donations and don't consider myself an Effective Altruist. Does GiveWell get to count for just the EA community?


By analogy, if a Catholic Church created a charity for curing malaria, and I donated money to it, that wouldn't make me Catholic. But still the existence of the charity, especially if people donated over a billion dollars to it, would be a credible argument against people saying "Catholics do nothing about curing malaria". Does that make sense?


I wish it were, but it's clearly not enough. There are plenty of people with healthy emotional empathy in the world, and yet children still die of easily preventable diseases.

I am plenty happy to simp for the Gates Foundation, but I think it's important to acknowledge that becoming Bill Gates to support charity is not a strategy the average person can replicate. The question for me is how I live my life to support the causes I care about, not who lives a more impressive life than I do.


I think the group that most reliably and effectively funds global health -- at least in terms of total $ -- would be the United Nations, or perhaps the Catholic Church, or otherwise one national government or another.

If you exclude "nations" then it does look to be the Church: "The Church operates more than 140,000 schools, 10,000 orphanages, 5,000 hospitals and some 16,000 other health clinics". Caritas, the relevant charitable umbrella organization, gives $2-4b per year on its own, and that's not including the many, many operations run by religious orders not under that umbrella, or by the hundreds of thousands of parishes around the world (most of which operate charitable operations of their own).

And yet, rationalists are totally happy criticizing the Catholic Church -- not that I'm complaining, but it seems a bit hypocritical.


I appreciate the good these organizations do, but I don't think that's the right measure of it. A person wouldn't, in expectation, serve global health better by becoming Catholic than by joining EA. That Catholicism is large isn't the same as it being effective at solving malaria. EA is tiny relative to the Church but still manages to provide funding within an order of magnitude of the figures you mention here, with the exact numbers depending on how you count.

Similarly, it's not like government funding is an overlooked part of EA. Working on government and government aid programs is something EA talks about, especially high-leverage areas like policy. If there's a more standard government role that an individual can take that has better outcomes than what EAs do, that would be an important update and I'd be interested in hearing it. But the criticism that EA is just not large enough is hard to act on, and more of a work in progress than a moral failing.


Rationalists and EAs spend far more time praising the Catholic Church and other religious groups than criticizing them - since they spend essentially no time criticizing them, and do occasionally praise them.


How do they escape the reality that the Earth will one day be destroyed, and that it's almost certainly impossible to ever colonize another planetary system? Just suicide out?


If you value maximizing the number of human lives that are lived, then even “almost certainly impossible” is enough to justify focusing a huge amount of effort on that. Maybe interstellar colonization is a one in a million shot, but it would multiply the number of human lives by billions or trillions or more.
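Sketching that arithmetic (the numbers here are placeholders, purely for illustration):

    # A long shot at a vastly larger future vs. a certain, smaller gain.
    p_colonize = 1e-6                    # "one in a million shot"
    lives_if_colonized = 1e12 * 8e9      # trillions of times today's ~8 billion people
    certain_lives_saved = 1e9            # a guaranteed benefit, for comparison

    expected_lives = p_colonize * lives_if_colonized   # 8e15
    print(expected_lives > certain_lives_saved)        # True, despite the tiny probability

Once you multiply by a large enough future, the tiny probability stops mattering to the comparison.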


Is the argument that we should try to do things that will benefit our theoretical and theoretically multitudinous descendants? Or is it that just taking action to make their existence more likely is a moral good? Because the latter is just brain dead.


Good question. I think it has to be the latter, given the immense time involved. You can make a connection between driving progress in certain areas today and increasing the odds that humanity eventually colonizes the stars. I don’t think you can make any connection with how well off those far-future humans will be.


If that's what's meant, it's a hilarious perversion of utilitarianism.



One example is Newcomb's problem. It presupposes a ridiculous scenario where a godlike being acts irrationally, and then people try to base their lives around "winning" a game that will never, ever happen to them.
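For anyone who hasn't run into it, here's a minimal sketch of the standard setup (the predictor's accuracy is an assumed figure, just for illustration):

    # Newcomb's problem: the predictor puts $1,000,000 in the opaque box only if
    # it predicts you will take that box alone; the clear box always holds $1,000.
    p = 0.99  # assumed predictor accuracy
    ev_one_box = p * 1_000_000                              # ~990,000
    ev_two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)  # ~11,000
    print(ev_one_box, ev_two_box)

The whole debate is over whether that expected-value table is even the right way to frame the choice - for a game no one will ever actually face.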



