Effective altruism has a sexual harassment problem, women say (time.com)
242 points by s17n on Feb 3, 2023 | 392 comments


The worst thing about being smart is how easy it is to talk yourself into believing just about anything. After all, you make really good arguments.

EA appeals to exactly that kind of really-smart-person who is perfectly capable of convincing themselves that they're always right about everything. And from there, you can justify all kinds of terrible things.

Once that happens, it can easily spiral out from there. People who know perfectly well they're misbehaving will claim that they aren't, using the same arguments. It won't hold water, but now we're swamped, and the entire thing crumbles.

I'd love to believe in effective altruism. I already know that my money is more effective in the hands of a food bank than giving people food myself. I'd love to think that could scale. It would be great to have smarter, better-informed people vetting things. But I don't have any reason to trust them -- in part because I know too many of the type of people who get involved and aren't trustworthy.


> EA appeals to exactly that kind of really-smart-person who is perfectly capable of convincing themselves that they're always right about everything. And from there, you can justify all kinds of terrible things.

I came to the same conclusion after a group of my friends got involved with the local rationalist and EA community, though for a different reason: Their drug habits.

They believed themselves to have a better grasp on human nature and behavior than the average person, and therefore believed they were better at controlling themselves. They also had a deep contrarian bias, which turned into a belief that drugs weren’t actually as bad as the system wanted us to believe.

Combine these two factors and they convinced themselves that they could harness recreational opioid use to improve their lives, but avoid the negative consequences that “normies” suffered by doing it wrong. I remember being at a party where several of them were explaining that they were on opioids right now and tried to use the fact that nothing terrible was happening as proof that they were performing rational drug use.

Long story short, the realities of recreational opioid use caught up with them and they were blind to the warning signs due to their hubris. I intentionally drifted away from that group around that time, so I don’t know what happened to them.

I will never forget how confident they were that addiction is something that only happens to other people, not rationalists like them.


I'm reminded of a fascinating series of Reddit threads, starting back in 2009, from somebody who convinced himself he "could handle anything once" and decided to try heroin, only to rapidly spiral out of control:

https://www.reddit.com/r/BestofRedditorUpdates/comments/wef6...


I'm only on the second update, and it seems like this guy speed-ran addiction:

> I can't stop crying. Fuck heroin. Fuck my life. I guess I don't need to say that since heroin pretty much fucked my life for me in under two weeks, I just want to die.


The issue with Reddit is that the story is as likely as not to be fake. Particularly, here, I don't think people are at risk of serious withdrawal after only two weeks of heroin use.

Though heroin use is obviously one of the dumbest things anyone can do.


As someone who has used opiates for well over a decade (for medical reasons), heroin and stronger synthetic opioids can absolutely cause an opioid naive individual to have withdrawal symptoms after even a week.

This is why OTC codeine labelling recommends no more than three consecutive days of use.


It depends on whether you have a specific antibody that targets specific opiate compounds. For example, oxycodone was very dysphoric and caused me even more pain, whereas hydrocodone gave me the intended therapeutic effect of euphoria and analgesia.


Hydrocodone has to be one of the best drugs known to man. I had a very small prescription for my wisdom teeth and I instantly understood the opioid dependency problem.


It's all about your body chemistry. I'm a redhead; red hair is a result of a mutation in the proopiomelanocortin (POMC) system, so opioids work funny on us. I've had hydrocodone/acetaminophen (paracetamol) pills, and they felt like... taking acetaminophen. No euphoria, not even increased pain relief. I quit after one dose, because what was the point in making myself constipated?


I dunno man. My mom and dad were both smack heads. They both went through recovery but had substantial relapses. My mom was as ginger as they come but really couldn't get off the horse. Dad found solace in terminal alcoholism. So.. I dunno. Thoughts?


Booze doesn't work that way, and there are a lot of mutations that can cause gingerism - but the one I have, opioids don't do much of anything.

Never had the hard stuff and not inclined to try. But the oral ones might as well be cornstarch.


That’s true, but the withdrawal after a week or two isn't going to be the "I just want to die" type.

Severe withdrawal comes after dose escalation, and you're not going to develop that much tolerance after 2 weeks.

I'm not saying it wouldn't be unpleasant, but it's not going to be the Trainspotting version.


Wait, you've got over-the-counter opiates in the US (I assume it's the US)? What?


Speaking for the US, yes indeed. Loperamide is technically an opioid and available OTC, though that's rarely a sought after compound unless one is in dire straits.

Codeine is available with a pharmacist's approval alone in my state. So while it may not technically be OTC by that definition (or maybe it is, I don't know how OTC is defined), it can be had without a prescription by visiting a participating pharmacy and simply requesting codeine+guaifenesin. Done it a few times myself.


Codeine + cough syrup is so popular in Texas that it spawned an entire subculture dedicated to its use. See DJ Screw in Houston (https://en.m.wikipedia.org/wiki/DJ_Screw).

Lest you think this is no longer the case – “lean”, as it is known by Houstonians (due to one's tendency to lean on things while under the influence), was a major plot point of a recent, mostly autobiographical Netflix series centered around Houston that was released late last year: https://www.keranews.org/arts-culture/2022-08-25/houston-ooz...


Vice released an excellent short piece on Houstonian screw & slab culture years back. Might still be on YouTube! It was great at the time, but I haven't seen it in quite a while.


For clarity, loperamide, sold under the brand name Imodium, is an anti-diarrheal medication. It is an opioid; however, it does not readily cross the blood-brain barrier. It only acts on the opioid receptors in the large intestine and slows the movement of food. In very high concentrations it may get into the brain.


If I go to a developing country where I cannot guarantee access to a bathroom, I pop one or two of these every morning to keep me regular.


"A pharmacist's approval" is a prescription, just one that happens to be written by the same person who fills it. So no, not OTC, which specifically refers to medications you can get without a prescription.


I suspect this is location-specific though. In the UK my migraine medication was available over the counter, but I still had to have a discussion with the pharmacist every time I bought it. Conversely, whenever I've needed an actual prescription medication, in an emergency or whatever, that has always had to go through my doctor to get the prescription sorted.


/me also a brit.

My go-to pain relief used to be a product called Codis, which was soluble aspirin+codeine. It used to be available OTC, until about 8 years ago, then it disappeared. You can still buy paracetamol+codeine OTC ("Paracodol"?). My understanding is that aspirin causes vasodilation, which causes more rapid absorption of codeine. Supposedly that's why Codis worked better than Paracodol.

I don't know why they took it off the shelves. There was no announcement; it just disappeared.

Codis was effective for both migraines and period pains, as reported to me (I'm not a woman, and I don't suffer from migraines).


This was Migraleve. It didn't have any painkilling element in the first tablet (it was a two-part pack); not much point anyway, as by that point I was throwing up.


Aspirin + codeine sounds like a based painkiller combo. Who needs a liver tbh?


It's paracetamol that's hard on the liver, not aspirin or codeine.


Well, sure, countries will have their own laws over this or anything. Ours are defined at the federal level and thus apply the same everywhere in the country; since GGP and I are both USian, that's the basis on which I replied. (I don't know enough about any other country's such laws to speak to them!)


The UK has codeine OTC... you assume. Everyone on the internet is American. It's funny, haha.


Not that unreasonable a guess. It's very late in Europe right now (currently 4 AM GMT, perhaps 2 AM when that was posted), and you know YC is an American company... right?

https://news.ycombinator.com/item?id=30210378


Why is the ownership structure of YC important? You know how the internet works right?

For better or worse, 'American' is the de facto standard on the internet, but please remember that only a small percentage of the global population is USian, so in all probability the person you're speaking to isn't from the US.


The English-speaking (no pun intended) population online is more likely to be from the US than the global population as a whole.

Disclaimer: Not English. Not American.


I would say that the US accounts for less than half of English-speaking netizens.

The UK, Australia, New Zealand, and Canada already add up to a sizeable chunk of the US's population. Then add in 1% of India, or China (I assume some proportion has access outside the Great Firewall), and you've got a population bigger than the US.


If you add up the numbers of the countries you mentioned (together with 1% of India or China), you'll realize that it's smaller than the US population...

US: 331.9 million

UK: 67.33 million

AU: 25.69 million

CA: 38.25 million

NZ: 5.123 million

IN: 1.408 billion * 0.01 ≈ 14 million
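
A quick back-of-the-envelope check of those figures (a minimal sketch in Python; the population numbers are just the rough ones quoted above, not authoritative):

    # Rough population figures in millions, as quoted above
    us = 331.9
    others = 67.33 + 25.69 + 38.25 + 5.123  # UK + AU + CA + NZ
    others += 1408 * 0.01                   # 1% of India, ~14.08

    print(round(others, 1))  # ~150.5 million
    print(others < us)       # True: still well short of the US figure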


Well the 1% was pulled out of my arse.

Point is, some very small percentage of the rest of the world is going to dwarf the US. Yes, the US will be the single biggest demographic, but less than 50% of the total.

I haven't been able to find many solid stats, but e.g. on English-language Wikipedia, the US accounts for 22k active editors out of 59k total.


I'm surprised at your statement. In response to a thread about how intelligent people erroneously think their intelligence protects them from addiction, and the story of one person getting addicted, your response is "that couldn't happen to us".


It's amazing how you're like a prime example of what's being talked about here and you're totally unaware of that.

I don't mean that to be a dick. I mean that because I care.

Opiates are dangerous. Very dangerous.


> The issue with Reddit is that the story is as likely as not to be fake.

Maybe, but if so, it's a fake narrative they've maintained consistently for over a decade:

https://www.reddit.com/user/SpontaneousH/


I had serious withdrawal symptoms 24 hours after medicating myself with controlled doses of hydromorphone after surgery. It’s always been remarkable to me how many people confidently assert I wasn’t addicted.


Conversely, I'm on long-term antidepressants and I get withdrawal symptoms if I forget to take them, but no one is ever referred to as being addicted to antidepressants.

I suspect there are different (social) mechanisms at work here, but addiction is generally applied to one specific form of drug reliance.


> but no one is ever referred to as being addicted to antidepressants

People who are sceptical of antidepressants, such as Professor Joanna Moncrieff (University College London), do speak of them as “addictive”, consider their use to (at least sometimes) be a case of “prescribed drug dependency”, and also speak of “withdrawal” from them.


I had very noticeable withdrawal after 3 weeks of using opiates as prescribed for a ruptured disc. Stopped because I was pain free a few days after surgery to fix it, and felt like I was going to die.


After a motorcycle accident I spent a week on hydromorphone, a month on oxycodone, and occasional fentanyl. I have some oxycodone left over at home and, if I'm being honest, I don't feel the slightest desire to take it. I haven't in the nine months I've had it, and I doubt I ever will. I live in SF as well, so fentanyl is easy to come by, should I so desire it.

Reading the literature, it appears that this is the experience for 9/10 people prescribed the drug. The problem, of course, is that 10% is a large number at scale.

Obviously, I've had no experience with heroin. But considering my experience with fentanyl, I can quite easily say that most of us are not susceptible.

This strikes me as yet another example of overstating risks. PTSD is the canonical case, of course, where the majority of people experiencing trauma won't get PTSD, and even among people experiencing major trauma only about a third will.

The short version is that the average human is very resilient.


This thread, and the linked AMAs, are amazing. Thanks for posting.

(Incidentally, I don't really care if they are 100% true or near-true or simply entirely plausible.)


Great story, I believe


I can't imagine what benefits they were looking for in opioid use/abuse. Like, most drugs have a very distinct use, ranging from "directly beneficial" like using stimulants to increase productivity, to "socially useful" like using alcohol or MDMA as a social lubricant. Opiates don't really fall into any of those boxes. They're not mechanistically useful: If you want euphoria, there are better drugs like MDMA or ketamine. If you want to prematurely end your stimulant rush, booze, GHB, or ketamine will do that. Their primary use is as a coping mechanism, because they make you feel generically good regardless of your actual situation. And they're not really socially useful either, because unlike, say, cocaine, the overlap between "valuable people to have in my social network" and "people who shoot heroin" is practically non-existent.

I say this as someone who is very drug positive: Stay away from opiates, except for treating extreme, acute, physical pain.


I mean, just like with a lot of drug abuse, it’s a method of escape. An extremely good way to make you forget about your troubles.

It wouldn’t have been anywhere near as big of a problem as it is if it wasn’t good at that.


> And they're not really socially useful either

A relative of mine (who used to be a heavy marijuana user) told me how he once tried heroin, and he really didn't like how it made him behave socially: he says it made him smug, arrogant, and an asshole who felt superior to everyone else. I don't know if that's some kind of idiosyncratic reaction or a common one, but he told me it was enough to convince him not to try it again and to just stick to marijuana and alcohol, which didn't have the same effect on him.


There are obvious benefits to using opiates: you will be less agitated, calmer, kinder, and better able to tolerate unpleasant stuff. I'm speaking from experience as well.

The problem is, it always ends with not wanting normal, undrugged life afterwards.


This sounds very familiar. The Netherlands is the only country in the world where alcohol consumption correlates positively with education level, and thus alcoholism and associated ills. It's nearly impossible to find people in these circles that think of alcohol as something to be careful with. Alcoholism only happens to 'others'.


> The Netherlands is the only country in the world where alcohol consumption correlates positively with education level

I'm reasonably sure this isn't true. It is also the case in the UK and the US. It is presumably also the case for many other countries, since it is hard to imagine what would make the UK, the US, and the Netherlands somehow unique in the world.

UK: "Higher educational attainment is associated with increased odds of daily alcohol consumption and problem drinking. The relationship is stronger for females than males. Individuals who achieved high test scores in childhood are at a significantly higher risk of abusing alcohol across all dimensions"

https://rmitlibraryvn.rmit.edu.vn/discovery/fulldisplay?vid=...

US: "Greater alcohol consumption is also associated with having a higher educational level and having a higher income [...] In the college sample, educational achievement is positively associated with drinking"

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4838609/


> UK, the US, and the Netherlands somehow unique in the world.

Oh, actually there's a ton they have in common: these are solidly Protestant cultures, with a history of hypocrisy.


Middle class alcoholism is viewed very differently to lower class alcoholism. If you have X glasses of wine a night but otherwise have a functional life, that is 'ok'. Getting shitfaced on a Friday night isn't.

Although it's the middle classes that write the rules so make of that what you will.


> alcohol consumption correlates positively with education level

Not too surprising; drinking a lot, frequently, is expensive. High education level correlates with high income. It's cheaper to get wasted on almost any illegal drug than it is with alcohol.


I was briefly pulled into EA through a friend. Unfortunately my experience as a man was also bad enough to turn me off. The TLDR version of this was:

1. Most people were full of themselves and full of s**.

2. This was, just like everything else, a business. Most people were in it for themselves: making a career out of EA, networking for professional growth, using EA in their own workplace to climb the ranks; the worst ones were just people virtue signaling for Instagram.

3. Drugs, sex, and alcohol were heavily involved.

4. At the end of the day, there was a lot of talking and less doing, which is the problem it is supposed to solve in the first place.


You genuinely can't make this shit up. Normally, if someone told me that rationalists were Bayes-ing themselves into justifying their own opioid addictions, I'd say that attempt at satire's a little too on the nose. This is just absolutely incredible.


No, they should have listened to their parents! Drug positivity is supposed to be reserved for cannabis and hallucinogens!


I'll say the EA and rationalist community varies a great deal from location to location. Where I am, the vast majority use drugs at a lower rate than the general population. There are of course some outliers who are into psychedelics, but it's still a very distinct subgroup. Where I am, the majority of EA/rationalists are more public-policy and human focused, less concerned about AI/x-risk, and way more skeptical of cryptocurrency.


This in particular does not look clever:

"They believed themselves to have a better grasp on human nature and behavior than the average person, and therefore believed they were better at controlling themselves."


>recreational opioid use

Horse? Were they shooting horse?


> I intentionally drifted away from that group around that time, so I don’t know what happened to them

This isn't a convincing story that they were wrong if you don't actually know what happened to them. It just looks like you're showing a bit of that hubris yourself, in assuming they went off a cliff.


OP disassociated from them when issues caught up with them. Even if they fixed their lives afterwards, this point stands.


What issues? How significant were these issues? The information is so vague that it tells us nothing. I'm skeptical opioids can be used this way, but the OP's post is not evidence of that.


It does not, not if they succeeded and ended up better off than if they hadn't started using in the first place.


You genuinely believe that this had the potential to happen after the drug-related issues started to show up? Because it is very, very unlikely.


However unlikely it is, naasking is right that OP's comment contains zero new information.


It does not. They started having drug-related issues despite thinking that wouldn't happen.

OP disassociating at that point is OP exercising good judgement. It does not cancel out his observation that issues started.

The "maybe in the next few months they gained something super special" is pure speculation - and it is not even that these people took drugs with that expectation. Instead they took drugs expecting no issues would happen to them.


> The "maybe in next few months they gained something super special" is pure speculation

As is the assumption that it all went to shit, that's why this "story" is useless as some cautionary tale. "Started having problems" could mean anything, from "life spiraled out of control" to "their mom found out and bitched at them".


This is their original belief: they convinced themselves that they could harness recreational opioid use to improve their lives, but avoid the negative consequences that “normies” suffered by doing it wrong.

This is the later claim: the realities of recreational opioid use caught up with them and they were blind to the warning signs.

There is no need for them to have experienced the worst of heroin addiction or whatever for them to be wrong or right. At that point, they had drug-related issues, so they were wrong.


> This is the later claim: the realities of recreational opioid use caught up with them and they were blind to the warning signs. [...] At that point, they had drug-related issues, so they were wrong.

Right, so what realities caught up with them exactly? What "warning signs"? How do you know this wasn't just the OP projecting some bullshit that doesn't actually matter?


Effective Altruism is just a modern iteration of a thing that's been around for a very long time. The fundamental idea is sound. However, in practice, it all-too-easily devolves into something really terrible. Especially once people start down the path of thinking the needs of today aren't as important as the needs of a hypothetical future population.

Personally, I started "tithing" when my first business was a success. In part because it's good to help the less fortunate, but also as an ethical stance. Having a business drove home that no business can be successful without the support of the community it starts in, so it's only right to share in the rewards.

So, I give 10% back. I have rules about it:

I always give to a local group who directly helps people and who is typically overlooked for charitable giving. I get to know the group pretty well first.

I never give to any group that won't keep my identity a secret.

I never give to any group that asks me for money.

I don't always give in the form of money. Sometimes, it's in the form of my time and effort, or in material goods, etc.

I don't give to "umbrella" groups whose purpose is fundraising for a collection of other groups. This isn't because I have a problem with them, but because they're not the ones who struggle the most to get donations.


>Especially once people start down the path of thinking the needs of today aren't as important as the needs of a hypothetical future population.

It's not that that bothers me so much as the fact that many effective altruists do it so badly. We need to be concerned with the future. That is the only reason to maintain roads and bridges, to prevent pollution, or to conserve resources like water in aquifers and helium. But effective altruists are as likely to talk about colonizing Mars as they are to talk about global warming.

Effective altruism is supposedly about making evidence-based decisions. We have no idea how likely "existential risks" are. We have no idea what, if anything, can be done about them. We cannot predict a year into the future, let alone millennia. So-called longtermism is nothing more than guesswork.


>It's not that that bothers me so much as the fact that many effective altruists do it so badly. [...] But effective altruists are as likely to talk about colonizing Mars as they are to talk about global warming.

Are they doing it badly, or are you not understanding their arguments? AFAIK effective altruists want to colonize Mars on x-risk grounds, which would explain why they want to prioritize that over global warming, even though the latter is happening right now. AFAIK they think that global warming is bad but isn't an existential risk, whereas colonizing Mars would mitigate many existential risks.


I've yet to see an argument for colonizing Mars for this purpose, that wouldn't be a better argument if the goal were instead "build robust, distributed bunkers on earth, and pay families to live in them part-time so there's always someone there".

Cheaper, and more effective.

Most plausible post-apocalyptic Earths would be far easier to live on than Mars.

The remaining threats that wouldn't also be pretty likely to take out Mars at the same time would be something like a whole-crust-liquefying impact, which we'd have a pretty good chance of spotting well in advance, and we could put some of the savings into getting better at that.

I think a bunch of smart people are also just romantics when it comes to space shit, and that's why they won't shut up about Mars, not because it's actually a good idea.

Hell, building orbital habs is probably a better idea than colonizing Mars, for those purposes, if we must do space shit.


> Most plausible post-apocalyptic Earths would be far easier to live on than Mars.

thank you

how are we supposed to build a second home on a dead, ruthlessly hostile planet until we demonstrate ourselves capable of stabilizing the biosphere and building a sustainable long-term civilization here

Earth is easy mode compared to Mars


Right: living on Mars is like living on Earth if its ambient surface radiation levels were significantly higher, nothing would grow in the soil anywhere without a ton of preparation, and you couldn't leave your house without a pressure suit. And there's no surface water. And the gravity's fucked up. And the temperatures suck for basically anything life-related. And none of the geological and chemical processes that keep our biosphere viable existed, at all.

So... like Earth if several apocalypses happened at once, including a few nigh-impossible ones. Except it starts that way. And it's actually even worse than that. Sure, maybe we could slam some comets into it and do a ton of other sci-fi shit over a few centuries and it'd eventually get better, sorta, a little—but c'mon, seriously?


> how are we supposed to build a second home on a dead, ruthlessly hostile planet until we demonstrate ourselves capable of stabilizing the biosphere and building a sustainable long-term civilization here

Because we can afford to make big mistakes in terraforming a dead, ruthlessly hostile planet.


A few microscopic fungal-like spore things could throw a wrench in that. Now the planet is a nature reserve.


Declaring Mars a “nature reserve” would be completely unenforceable. Suppose you convince US Congress to pass a law banning Americans from sending humans to Mars, due to the risk of contamination to native Martian microbes. What happens when China says “now is our chance to show the world we’ve eclipsed the US by sending humans to Mars when they won’t“? Even though such a Chinese mission isn’t feasible today (an American one arguably isn’t either), who can say what its feasibility will be in another 20 or 50 years? And if not China, then sooner or later somebody else. Sustained global consensus on this issue is unlikely, which makes Mars as a “nature reserve” meaningless in the long-term. On Earth, the vast majority of nature reserves only exist because some government has military control of the territory and hence can enforce that status.


> Declaring Mars a “nature reserve” would be completely unenforceable.

Generally worked pretty well for Antarctica. A few research bases are permitted but colonization is internationally banned and not happening.

BTW, colonizing Antarctica would be a lot easier than colonizing Mars. Far fewer technical challenges to overcome, and much more practical experience overcoming those challenges.


Many people who call for Mars to be declared a "nature reserve" aren't just calling for a ban on Mars colonisation, they are calling on a ban on crewed exploration – either of the planet as a whole, or at least of sites they view as "environmentally sensitive" (which basically turns out to be the most interesting exploration targets, and many of the sites which would most easily host crews). They are worried about microbial contamination, which is a rather different environmental concern from Antarctica, and requires much stricter limits on human activity.

When someone like Elon Musk talks about "colonising" Mars, all he's realistically talking about – at first – is a crewed research station, so not that different from what we have in Antarctica. And many people who want Mars to be a "nature reserve" are opposed to even that. Yes, Musk hopes that such a research station will eventually grow into a buzzing metropolis, but I think if that ever happens it is a long way off. Musk might live to see crewed research stations established, I very much doubt he'll live to see genuine colonisation, much as he enjoys publicly fantasising about that topic.

Even the ban on colonising Antarctica only really works because it is banning something no government wants to do anyway. Crewed exploration of Mars would be attractive in principle to governments because of the benefits for national prestige, getting in the history-books, outshining the competition – the same basic reasons why the US went to the Moon. Of course, that benefit has to be weighed against the immense cost – but costs aren't constant, with further technological and economic developments it is going to become more affordable.

All the groundbreaking exploration opportunities with Antarctica have already been used up, so governments don't have the same motivations there. And I think the first human visit to another planet is going to be much more noteworthy and prestigious and memorable than whoever was first to explore some big freezing cold island on Earth. A thousand years from now, most people will probably still remember who Neil Armstrong was; I doubt many other people from the 20th century will still be household names (I suppose Einstein and Hitler would be the other likely candidates). It's only been a century or two, but the average person has no clue who the first explorers of Antarctica were.


Why?

Fungal spores would probably be contamination from whatever probe we sent there. Any life evolving there would not be from any terrestrial evolutionary branch such as fungi.


All this terror about "contaminating" other objects in the solar system with terran life is just misguided. We should all hope to get some terran life established on them.

Heck, we need to build probes full of selected terran extremophiles and spray them into the Martian atmosphere.


1. Any existing life there is, at this point, highly improbable

2. If there is any, how could terran life be competitive with it if the existing life has evolved to match the local environment over billions of years?

3. If there is existing life, how could a biologist not be able to easily distinguish it from terran life?

4. If the life there is ancient and now extinct, terran life isn't going to interfere with that


To answer part 2 with just one example: The native life of Mars, if it still exists, would exist in a state of homeostasis with its environment. It would have to in order to still remain existent. If terrestrial organisms were capable of replicating under martian conditions, they could easily eat everything up and then die off. Never quite getting the time necessary to adapt to the ecological limits of their new habitat. And by this process driving the native life to extinction as well.

To answer part 3: We're still discovering new kingdoms of life on Earth (though it's unlikely we'll discover new domains). If localized panspermia exists within our solar system (from meteor impacts or the like) it's possible martian life and terrestrial life are related enough for the martian life to fit within the already existing family tree of terrestrial life. https://astronomy.com/news/2021/05/did-life-on-earth-come-fr...


2. They'd never eat all of it. Also, the distribution of either form will never be even across the planet. There will be "islands" of one or the other.

3. Biologists are easily able to determine whether they are new kingdoms or not. They're also able to estimate how long ago divergence from a common root happened.

There are many, many examples of parallel evolution in terran biology, but none of them are confused with each other. It's absurdly unlikely that a terran modern amoeba will be confused with a Martian amoeba.


2) Localized sure, I wasn't arguing about the entire planet. But introduced life could drive the native life to a local extinction. And if it did so fast enough we would never know the local life had been there.

3) Yes, I know. While this isn't my specialty I work at an organization that does have people that specialize in this. The difficulty would be in definitively concluding whether this is a native divergence that we've just never seen before, or the result of Martian evolution.


2. We've found fossilized remnants of bacteria in rocks, haven't we? There's also ice on Mars. If life existed, we'd find it frozen in the ice.

3. A billion years of evolutionary divergence, with local alien adaptations, is going to be very hard to confuse with anything brought over by a probe.


I personally agree, but I was responding to the claim that finding fungal spores would mean we would necessarily turn Mars into a nature reserve and not touch it. I pointed out that a fungal spore wouldn't be Martian but earthly in origin.


>how are we supposed to build a second home on a dead, ruthlessly hostile planet until we demonstrate ourselves capable of stabilizing the biosphere and building a sustainable long-term civilization here

Because its unique challenges and constraints may make us develop technology that we wouldn't otherwise, which may in turn prove useful back on Earth. Many of our technological advancements come from military research, where there was no civilian demand. We developed computers so we could break encryption; we developed the internet as a successor to ARPANET, a military network originally made to maintain command and control in the event of nuclear exchange. Standardized clothes sizing was invented to make uniforms more cheaply. Satellites were invented so we could spy on our military rivals. The entire space program was a spin-off of the ICBM program. Nuclear power came from atomic bomb research. We have developed many advanced prosthetics because injured veterans needed them, and weather prediction came from radar developed to detect enemy planes.

But what if we could have something less self-destructive than war that would germinate new technologies? That's one reason to colonize space. Space colonization gives us many of the same challenges war does without the need for mass loss of life: needs for new materials, new means of generating power, new modes of transportation. Space exploration will give us new frontiers to strive against, rather than finding better ways to murder each other. The ease of Earth doesn't provide those challenges. I can dig up my back yard, throw seed on the ground, and have vegetables to eat in the fall, but I have learned nothing; find a way to grow food from lunar or Martian regolith and you have invented a way to rapidly create new soil, solved Earth's soil erosion problems, and found new ways of removing toxins from contaminated environments.


>I can dig up my back yard, throw seed on the ground, and have vegetables to eat in the fall, but I have learned nothing; find a way to grow food from lunar or Martian regolith and you have invented a way to rapidly create new soil, solved Earth's soil erosion problems, and found new ways of removing toxins from contaminated environments.

We already have no-till agriculture. The plants terraform the soil for us. The problem with this terraformation is that you need to rotate the plants, because every plant terraforms the soil in a different way. Plants don't deplete the soil unless you replant the same plant over and over, destroy the terraforming progress through tillage, harvest it, and then never bring the poop back.

Some people will now say that avoiding soil depletion in the short term is a bad idea because it means using a little bit more land (or bring up incorrect numbers).


Agreed 1000000% and this argument bugs the fuck out of me.

If we can't successfully terraform Earth how the fuck are we going to terraform Mars?


It goes both ways. Learning to live on a dead world like Mars (or better yet, off-world altogether) will necessarily entail significant improvements in recycling, atmospheric control, and energy management. Those same technologies could be critical to reversing the damage we've done to our homeworld and enabling us to live on it sustainably.


> building a sustainable long-term civilization here [...] Earth is easy mode compared to Mars

Which is easier, building a colony on Mars or solving politics? I rest my case. Mars, here we come!


Do you imagine a Mars colony will not have its own politics? Politics is arguably motivated in large part by scarcity. On Mars resources are a lot harder to come by than on Earth.

You could make an argument that a shared struggle against extreme conditions would stabilize societies and make cooperation a necessity, but a Mars colony is going to need a lot of help from Earth to get on its feet, in which case we still need to solve terrestrial politics anyway.


> Do you imagine a Mars colony will not have its own politics

A Mars colony would be small by necessity. Everyone will know everyone else. In such communities "politics" is not nearly so abstract. For instance, you can't convince yourself that climate change is not happening when you personally know the experts in that field taking the measurements and crunching the data. There will always be disagreements but nowhere near the type of nonsense we see here on Earth.


Pretty much. I want to colonize Mars, but not as a solution to apocalypse scenarios. Long, long term it would help, but yeah, it isn't the optimal solution to the current problems we are facing. I still want to colonize it, just because it would be cool. Whether we take 5 years or 150 years or 1,000 years to do it doesn't bother me. Although doing it in the next 30 years would mean there is a good chance I could see it happen, but that's about it.


One big problem with Mars is that it requires technology to live on. Most of the risks are civilization-ending events, not extinction-level events. When civilization fails, on Earth the survivors bang rocks together; on Mars they die. It turns civilization-ending into extinction.

One big question is whether we can make a Mars civilization that would survive Earth's collapse. It is possible that there are some things, like biological samples or advanced technology, that have to come from the mother planet. Mars will likely take a long time to become self-sufficient, and until then it isn't a backup. The easier self-sufficiency is, the less Mars is needed as a backup.

The final thing is that colonizing Mars could introduce risks. It would involve developing technology that makes disaster more likely, like advanced AI, genetic engineering, or moving asteroids in space. Or it could add a place for conflict, leading to an Earth-Mars war that destroys both planets.


> Most plausible post-apocalyptic Earths would be far easier to live on than Mars.

Most, but not all right? So then you agree that there is still a case for colonies on other worlds, if only to cover those less plausible scenarios.


I understand their arguments just fine. I just don't think they make any sense.

Ought implies can. We cannot predict the far future of humanity. We cannot colonize other planets in the foreseeable future. We cannot plan how to handle future technology that we aren't yet sure is even possible.

The things we actually can predict and control, like global warming and natural disasters and pandemics, are handled with regular old public policy. Longtermism, almost by definition, refers to things we can neither predict nor control.


>Are they doing it badly, or are you not understanding their arguments?

Do YOU not understand their arguments? They are facially stupid. The notion that we should be colonizing mars because of global warming is the stupidest thing I've ever read or heard.


>The notion that we should be colonizing mars because of global warming is the stupidest thing I've ever read or heard.

Yeah, because that's a strawman you imagined in your head. I'm not sure what gave you the impression that the two were related (other than that they're competing options) based on my previous comment.


> Yeah, because that's a strawman you imagined in your head.

I've read enough of that argument being made in earnest on this forum that I'm going to have to go with the parent poster.

There are many intelligent people who seriously believe in that strawman. (Although it's also possible that they don't actually believe in it, and are just making the argument because the purpose of colonizing Mars isn't increasing the resilience of Earth, but getting away from the hoi polloi. Those people will be in for a rude surprise when they discover that they are also part of the hoi polloi.)


Someone posted it upthread. You can replace any catastrophic event with global warming and it's just as facially stupid. Literally like the thought process of a child. It's completely divorced from reality.


While I don't expect extinction from any particular given cause — and definitely not from any of global warming, nuclear war, loss of biodiversity, peak phosphorous, or the ozone layer — humans have a few massive failure modes:

1. We refuse to believe very bad scenarios until much too late. Doesn't need to be apocalyptic: The Titanic isn't sinking; all of Hiroshima must have gone silent because a telegraph cable was damaged and it can't possibly be the entire city destroyed, and even if it was the Americans can't possibly repeat it; the Cultural Revolution cannot fail; the King can't be executed for treason by his own parliament; the general can't cross the Rubicon; Brutus can't betray me.

I think many of those things would have been dismissed the way you're doing now.

2. Tech is changing. I don't expect extinction from a natural pandemic, but from an artificial one is plausible; not from a natural impact event, but artificial is… not yet, but no harder than creating a Mars colony; propaganda has already started multiple genocide attempts, what happens when two independent campaigns are started at the same time when both groups want to genocide everyone not in their group?

The same risks would still be present on Mars, and the only way I see around the deliberate-impact risk is space habitats, which have their own different set of problems (given we can't coordinate on greenhouse gases, I see no chance of us coordinating on Kessler syndrome either in cis-lunar space or in Dyson swarm scenarios).

I don't have any solutions here, though.


My money is on the quiet failure mode. The demographic collapses we see happening around the world continue and spread as more people have the resources to live individually, without family. Through automation we overcome the economic issues caused by population inversion, leisure is the norm, ambitions are confined to personal goals, and the human species coasts comfortably down to nothing.


I think that direction will rapidly lead to people of the "Quiverfull" attitude (not necessarily literally in the Christian group of that name) becoming dominant.


> what happens when two independent campaigns are started at the same time when both groups want to genocide everyone not in their group?

In the awful real-world history of genocide, I don’t think “we want to genocide everyone except for ourselves” has ever actually happened. Genocide is always targeted against certain groups, with others left alone. I remember someone here saying that “Nazis wanted to kill all minorities”, but that’s historically false; we all know how they sought to exterminate some minorities, and what is far less well-known is how they actually promoted and even improved the rights of others, which they saw much more favourably, such as Frisians and Bretons. “Let’s genocide everyone except for ourselves” is the kind of policy which cartoon Nazis would adopt but no one in the real world ever has. I suppose something genuinely new could happen, but it doesn’t seem particularly likely; far less likely than the sad near-inevitability of future genocides (of the targeted kind with which we are familiar).


> The notion that we should be colonizing mars because of global warming is the stupidest thing I've ever read or heard.

Right, so you don't understand their arguments then, thanks for clearing that up. Global warming is only an additional reason, not the only or main reason.


Why Mars though? Why not colonize the Gobi Desert first?


Presumably because the goal is to survive something bad that happens to Earth. If you're on Mars (and self-sustaining...), that's no big deal. If you're in the Gobi Desert, you're going to be the first people to get wiped out by whatever happens to Earth.


x-risk is existential risk, as in humans get wiped out. Some big ones are meteor impact, nuclear war, and disease. The risk of those things ending all of humanity is greatly reduced with a second planet. It's not reduced with a desert colony.


> The risk of those things ending all of humanity is greatly reduced with a second planet.

I can imagine a situation where that's true. But right now, for almost any situation, a series of super-bunkers is orders of magnitude cheaper and more effective. A lot of ridiculously destructive things can happen to Earth and it will still be a better place to live than Mars.


Yeah you can come to different conclusions than colonizing Mars being a good strategy for human survival. I'm just answering the "why not the desert?" question.


Our Earth has had impact events that no bunker would save us from. Something like the one that created the Moon, or even the much smaller impact that created the Borealis Basin on Mars, would boil the oceans and melt the surface.


The early solar system was very different, with way more debris, including large planetesimals. The planetesimals caused the big impacts you mentioned, and the smaller stuff caused the impacts we can see on other bodies.

The solar system is a much cleaner place now. All the planetesimals and most of the asteroids have impacted or been kicked out. Big things are in stable orbits. There are a lot of dangerous asteroids, but we track most of the large ones. There is a risk that something big will be kicked out of orbit, but it is rare enough that we don't know how unlikely it is.

Large impacts, 5 km or bigger, happen roughly every 20 million years.


It would be massively cheaper and faster to robotically colonize near-Earth space and get really, really good at killer asteroid detection and redirection.


OK, but redirection capability must be abundant enough that outright sabotage or terrorism can be countered by another nation, or else you will have an increase in extinction risk.


None of those things would make Earth less hospitable than Mars. A desert colony would still be better off than a colony trying to survive on Mars, particularly once Earth's resources are cut off. Mars is far more hostile than anything likely to happen to Earth over the next hundred million years.


It's not about hospitable. It's about survivable. There are large enough meteor strikes where you'd be better off on a self-sustaining Mars colony than anywhere on Earth.


Unless you're personally in the strike zone, that's not true.

And one should also bear in mind that Mars is at no less risk of meteor impacts than Earth.


It’s not about Mars being lower risk but independent risk. Someone could decide to keep copies of important documents in their vacation home not because it’s less likely to have a fire, but because it’s less likely for both houses to have a fire.

I used the wrong word when I said meteor. They’re too small. A comet or asteroid of 100km diameter would raise the temperature of the surface of the Earth by hundreds of degrees and then there’d be decades of darkness. https://www.sciencedirect.com/science/article/pii/S001632872...
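
To make the independence point concrete, here's a minimal sketch with made-up, purely illustrative probabilities (not real risk estimates), mirroring the vacation-home analogy:

    # Hypothetical annual fire risk at each house, assumed independent.
    p_home     = 0.01
    p_vacation = 0.01

    # Both burning down in the same year is the product of the two risks:
    p_both = p_home * p_vacation
    print(p_both)  # 0.0001 -- 100x less likely than either fire alone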


A Mars colony that simply puts humans on Mars is decades away.

A Mars colony that puts a whole parallel, self-sufficient society on Mars, with its own semiconductor manufacturing and so on, is millennia away.


I would wager that if we get an established research colony on Mars, we're 100 years away from a mostly self-sufficient small city.

We're not going to have Mars colonized in a decade or two, but it's not going to take a thousand years, either. Probably. I'd say thriving colonies within a century or two.


> We have no idea how likely "existential risks" are.

We absolutely do. We have quantified the risks of supervolcanoes, asteroid impacts and coronal mass ejections. We continue to quantify the ongoing risk of climate change and nuclear war (how many minutes to midnight?). The real open questions are the likelihood of a biological agent (natural or engineered), and AI.

The fact that we don't know the risks should make you more worried about those. Unknown unknowns can bite you in the ass hard with no warning. Maybe we should figure out some warning signs.

> We have no idea what, if anything, can be done about them.

That's what research is for. Sounds like maybe we should fund some research into these issues.


Yep. But if you only need to solve problems in what’s effectively your own science fiction novel, you’ll never fail and don’t have to face the very annoying practical problems that keep stuff from getting done in the real world.

Which is why everyone has a fantastic zombie apocalypse plan but no realistic ideas to address their local non-zombie violent crime.


> It's not that that bothers me so much as the fact that many effective altruists do it so badly.

I feel the same about Rationalists and rationality. They even had an excellent approach with their motto: "We're only aspiring rationalists", but when you remind them of that motto in the process of them being not actually rational, it has no effect.

There's got to be a way to (at least approximately) solve something that is so in your face, right there in an argument, the very essence of it, but it is a very tricky phenomenon; it always finds a way to slip out of any corner you back it into.


Oh, I agree! I didn't mean to imply that being concerned with the future isn't critically important. It is. I like how you put it better -- it's that they do it so badly.


I never give to any group that asks me for money.

Far be it from me to second-guess anybody's giving (motes and beams and all that) but this rules out many of the most effective aid organizations, all of which are absolutely off-the-charts obnoxious about fundraising --- because it works.


> many of the most effective aid organizations, all of which are absolutely off-the-charts obnoxious about fundraising

This doesn't seem to jive much with what's reported by charity evaluators like GiveWell, or with what kinds of charitable organizations get grants from more traditional but still high-impact philanthropies like the B&MGF.

It's quite plausible that too much emphasis on fund raising among the general public distorts incentives within these charities and makes them less likely to be highly effective on average. If so, we're better off when the job of publicly raising charitable donations is spun off to separate organizations, such as GiveWell or more generally the EA movement itself.


Fundraising expenses are a huge problem with large charities, but it doesn't follow that fundraising annoyingness is a huge problem. It's not a customer service problem with donors; it's a "using too much of proceeds on fundraising" problem.


If an organization believes spending a marginal dollar of money on their programs is the best way to improve the world, then spending $10 to get $11 in donations allows them to spend an extra dollar on it. It's rational and even morally required. (The only potential negative being the extent that winning a contribution crowds out funding from other causes.)
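
A toy illustration of that marginal-dollar logic, using only the hypothetical $10/$11 figures above:

    # Toy numbers from the comment above: spend $10 on fundraising,
    # receive $11 in donations as a result.
    fundraising_spend = 10.0
    donations_raised  = 11.0

    # Net new money available for programs:
    extra_for_programs = donations_raised - fundraising_spend
    print(extra_for_programs)  # 1.0 -- positive, so the spend pays for itself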

More generally, people overly emphasize low administrative expenses as a sign of quality. You need overhead to effectively administrate and evaluate programs.


I don't want to get tangled up in abstractions here. To a decent first approximation, every large charity well-reviewed by Charity Navigator (or the like) fundraises from past donors aggressively. It would be a red flag if they weren't annoying previous donors. Empirically, the idea of "never giving money to organizations that ask for money" is likely to steer you away from the most effective aid organizations.


> seem to jive

Jibe. Yes, words change and evolve, but I only mention it because to jive has another meaning: to BS somebody.

I agree with your overall point that "I won't give to groups who tell me they need money" is a pretty high bar to set. However, GP's comment is in keeping with something I've come to think, which is that organizations will re-form themselves around your donations (I give large amounts because I can afford them) and they'll befriend you, and it becomes a difficult situation to extricate yourself from. I tend to do one-time gifts and then move on.


>jibe

Or gibe. The problem is jibe has negative connotations, whereas 'to jive with' seems to me to be a metaphor that works (I assume it's used in the dancing sense?).

I don't know if the meaning of the phrase has changed somewhat; 'to jibe with' suggests a sarcastic undertone to me, but in modern usage no sarcasm is intended, so maybe jive is the correct term for the current usage of 'to jX with'.


Jibing is a nautical term, referring to a sailing boat turning through the wind so that the boom flips from one side of the boat to the other.

I have no idea how it acquired the pejorative meaning of making unpleasant remarks.


It looks interesting: "jibe" in the pejorative sense is both an (understandable) alternate spelling of "gibe", and also probably shares a root with "gibe" --- both probably stem from a word that means "rough handling", "kick", or "rear up".


True, that's the point. It's a necessary self-protective stance. Before I adopted that rule (and the secrecy rule), giving money resulted in me being hounded incessantly for money from every other group under the sun. I think I got it worse than many because I tend to give large amounts. It was a nightmare.

I'll never make that mistake again.


Another writer I like, discussing exactly this problem, referred to it as "the quantum unit of sacrifice". It is annoying. Very annoying! But like, that's all it is.


It was more than just annoying. It cost me time and energy.


I distinctly remember signing up to give £5 a month to a charity on the condition that if they ever contacted me again asking me for more money I'd immediately cancel my Direct Debit. They didn't even get their second payment.


Honestly, the £5 a month you were giving them is probably less than the cost of special casing you in their largely automated donor marketing systems so this is a net win to both of you.


And yet it's worth it for them to have people canvassing door-to-door for those £5 a month donations with a <10% conversion rate. I guess the difference is that those people don't get paid, and the administrators do.

EDIT: Unless they literally don't care about the base rate donators at all. They only exist so that some of them will get converted into higher rate donators later.


Yeah. By contrast if OP had donated 100K I would be interested in the outcome...


If you can't unsubscribe it is spam and illegal.


I suppose the argument could be that the charities who aren't savvy enough to play ball are the ones who could use the attention. Small, hyperlocal charities might not have the resources for a dedicated marketer in the first place, and even if they're less "effective" from a global optimization perspective, most people probably get greater utility from donating to local causes.


This is precisely my reasoning. The larger charities don't need me. Also, this is a way, in large part, to give back to my local community -- the people who supported (and support) my business efforts.

There are many ways to measure impact. I choose to measure it locally.


If it works for those orgs, then those orgs don't need his money anyway.


The first point was:

I always give to a local group who directly helps people and who is typically overlooked for charitable giving. I get to know the group pretty well first.

So maybe they are specifically looking for grass-roots organisations that do good work but are less able to fundraise.


Not a problem. There are lots of trees in the forest. Those folks can go their merry way, while some of the rest of us have different goals, personal as well as noble.

And who's to say 'most effective'? I used to think the Red Cross was one of those, until they got caught driving around aimlessly during Katrina (was it?), doing nothing but spreading their brand. Or so I recall. If I got that wrong I apologize, but my point is that bigger means less transparent. For instance, those big effective organizations are spending a butt-ton on fundraising.


Nobody who has paid attention to aid organizations over the last 2 decades believed the Red Cross was effective, just for what it's worth. The Red Cross is practically the entire motivation for sites like Charity Navigator.


> because it works

apparently not!


> Effective Altruism is just a modern iteration of a thing that's been around for a very long time.

Which you think is what, exactly? I'm under the impression that thing is warmed-over utilitarianism.

> The fundamental idea is sound.

I do not believe utilitarianism is sound, because its logic can be easily used to justify some obviously horrible things. However the framework appeals very strongly to "rationalist" type people.


It's not utilitarianism, it's a scam.

It's a group of people persuading themselves they're special and entitled, because of course they are, and then trying to sell that line - financially, psychologically, sometimes politically - to themselves and others.

Which is not a new thing in any way at all. The wrapping changes, the psychological games don't.

I have a rule of thumb which is that if you want to understand a movement, organisation, team, social or personal relationship, or any other grouping, the messaging and the stated purpose are largely irrelevant. The real tell is the quality of the relationships - internally, and with outsiders.

If there's a lot of entitlement and rhetorical myth-making + grandiosity happening, expect some serious dysfunction, and very likely non-congruence and hypocrisy.


>I do not believe utilitarianism is sound, because its logic can be easily used to justify some obviously horrible things.

Right, but that doesn't mean that we shouldn't care about consequences at all. There's a pretty big gap between "given that we have scarce resources, we should maximize the impact of our use of them" and "committing this atrocity is fine because the utility calculations work out".


The reason EA seems like it's a form of utilitarianism is of course the association with the so-called rationalist community. As you note, it's very appealing to that type of person. This is partly because the math used to rigorously compare consequences seems easy, and partly because utilitarianism has a lot of good places to hide your subjective value judgements.

You can apply EA-like concepts with any sort of consequentialist ethics. E.g. the Rawlsian veil of ignorance can work -- would hypothetical-me rather reduce his chance of dying of malaria by X%, or reduce his chance of malnutrition by Y%? It's just harder to explain why you rank one course of action over another, and therefore you're probably not going to be able to centralize the decision making.

This isn't because it's somehow unsound[0]. It's because it's harder (though not impossible) to explain with math, and the subjective value judgements are right in your face rather than hidden in concepts like utility functions.

[0]- It might be; you might not accept the premise of the veil of ignorance. That's not the reason it seems trickier than the utilitarian version, which has the same problem.


The "10% of lifetime income to charity" pledge is pretty close to Christian tithing, Islamic zakat, and suchlike. Who also claim to be spending donations to help the poorest people in society, and with low waste.

Of course, EA has a bunch of other weird stuff like AI safety, which isn't an idea that's been around for millennia.


Well, actually, on AI safety: https://en.m.wikipedia.org/wiki/Golem


When you make that juxtaposition, the idea that you must obey ridiculous rules in order to placate an invisible omnipotent being does seem to have religious analogs.


> I do not believe utilitarianism is sound, because its logic can be easily used to justify some obviously horrible things.

I think most moral philosophers agree with you on that.

IMO, the way people talk about Utilitarianism feels to me exactly the same as my own feelings when at school, having only been taught 2D vectors and basic trig functions, I spent 6 months trying to figure out what it even meant to render in 3D — in particular the sense of pride when I arrived at two useful frameworks (which I later learned were unshaded 3D projection and ray-marching).

By analogy: while that's a good start, there's a long way from that to even Doom/Marathon let alone modern rendering; similarly while Utilitarianism is a good start, it's only saying "good and bad can be quantified and added" and falls over very quickly when you involve even moderately sized groups of spherical-cow-in-a-vacuum people with finite capacity for experiencing utils. It also very definitely can't tell you what you should value, because of the is/ought divide.

Once your model for people is complex enough to admit that some of them are into BDSM, I don't think the original model of simple pluses and minuses can even ascribe a particular util level any more.


Utility can't be one dimensional and it probably isn't linear.

In other words, we would have to treat most situations as unique and then try to find patterns and then the whole rationalism thing goes out of the window.


Only spherical-rationalism-in-a-vacuum goes out the window.

Unfortunately, and this goes back to my previous point, lots of people (I'm not immune!) mistake taking the first baby step for climbing the whole mountain.

I'm reminded of the joke about an engineer, a physicist, and a theoretical mathematician who each wake up to a fire in their bedrooms: https://old.reddit.com/r/MathJokes/comments/j8bax6/an_engine...


> I do not believe utilitarianism is sound, because its logic can be easily used to justify some obviously horrible things. However the framework appeals very strongly to "rationalist" type people.

If it sounds horrible, then it probably is?

The logical chain of hurting a human leading to helping two humans doesn't sound like something that is moral or dependable.

Giving to charities that focus on the most severe and urgent problem of humanity is a very straightforward idea.

However, not all charities are focused on the most urgent problems. For example, a local charity I frequent does improv and comedy theater, hardly 'urgent' needs. What people don't like to hear is that they could donate money to a third-world NGO providing vaccines, or fighting corruption in third-world countries, instead of their local church or community theater.

Don't get me wrong, community theaters/churches/etc are good things. They just aren't saving lives.


Every ethical theory is a mess of contradictions but throwing ethics out entirely isn’t the right answer. As for hurting someone to help others, sometimes someone needs to kill someone like Hitler for the greater good.


> I do not believe utilitarianism is sound, because its logic can be easily used to justify some obviously horrible things.

This was the point I was making, yes. The idea itself isn't logically faulty, but it is easy to subvert.


There was a study a while ago that said people who accept utilitarianism are more likely to have psychopathic traits.


Based. Utilitarianism is cringe, these guys need to read Kant or something.


On the subject of charities being annoying to you, there's a simple answer:

Start a donor-advised fund (DAF) with your financial institution. Most have one, and some large charities (e.g. Silicon Valley Charitable Foundation) have their own.

You fund it with cash or appreciated stock. For the latter, you can take a tax deduction that year for the entire market value.

You can't get the money back (that's why it was deductible). You can "advise" that they make a grant to some 501(c)(3) charity, and from my experience they always do, after due diligence.

Here's why they're not annoying: you can give anonymously if you want. They can't bother you because they don't know who you are.

The one I have allows me to name a "successor trustee" who takes over as advisor if I'm gone.


> I never give to any group that asks me for money.

I get how this would keep you personally from being annoyed, but it seems to incentivize worse outcomes. "Let's collect all the money we can, we never know if we'll get more. Let's grow that reserve" vs. "In a bad month we can get our usual donors/JohnFen to give us his annual donation a little early".


Perhaps, although I haven't seen that. But I also have to deal with the limits of what I can tolerate. That rule makes it possible for me to give money. Without it, I wouldn't. I experienced what it was like without that protection, and it's beyond what I could put up with.


It's possible that the rule is both required for you to give and forces charities into suboptimal strategies. It makes sense to protect your sanity, but not if your intent was to filter charities by need/administration.


> I get to know the group pretty well first.

I think this is a very, very, very important step. What's more, it's a step that I don't think can be outsourced to someone else, which is why I'm skeptical about claims by, among others, the Effective Altruism movement, to be able to do this kind of thing on your behalf.


>What's more, it's a step that I don't think can be outsourced to someone else, which is why I'm skeptical about claims by, among others, the Effective Altruism movement, to be able to do this kind of thing on your behalf.

Why can't this be done? Society in general outsources due diligence to third parties all the time. Banks outsource credit worthiness assessments to credit bureaus. Passive investors outsource price discovery to other market participants. Online shoppers outsource quality control to reviewers. I agree that there's no substitute for doing it yourself, but it's simply not realistic in many cases to do the due diligence yourself. Even if you do it yourself, there's no guarantee that you'll do a better job than the professionals.


It sounds like a nice idea, but I don't think it's a very practical one.

It drives away anyone who wants to give but is unable or unwilling to devote the kind of time necessary to do this kind of in-depth research (the amount of which only goes up the more money you want to give away).


The problem with the "sacrifice the present for the long term thinking" over ridiculous time scales beyond 100 years is the concept of diminishing returns.

Anything around 3% or more endless annual growth must invent faster-than-light travel within less than a millennium and then exclusively use it to colonize multiple galaxies.

What this tells you is that we are going to drown in capital within the next two thousand years and probably launch a few space missions toward neighboring solar systems but all of those things are closer to 0% annual growth than 3%.
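(A rough back-of-the-envelope check of that compounding claim, purely illustrative; the 3% rate and 1,000-year horizon are just the numbers above, not anything from an EA source:)

    # Sketch: what sustained 3% annual growth compounds to over 1,000 years.
    rate = 0.03
    years = 1000
    factor = (1 + rate) ** years
    print(f"{rate:.0%} growth for {years} years multiplies output by ~{factor:.1e}x")
    # -> roughly 6.9e12, i.e. about seven trillion times today's output,
    #    which is why "endless 3% growth" implies expansion far beyond one planet.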


A 'sound idea' that devolves into terrible things whenever applied is not a sound idea.


That's a very good point. You (and the other commenters making the same point) have convinced me. I should find a better way of phrasing it.


true. and those of us who are not psychopaths don't even have to think about it. it just happens.


> Effective Altruism is just a modern iteration of a thing that's been around for a very long time. The fundamental idea is sound.

It's largely a reinvention of a philosophical program called logical positivism, which the philosophers gave up on because it didn't work, so it's not sound. They just brought it back because it's the most STEMy kind of philosophy and they think anything with enough math must be right.

(An example of them not reading the literature is their invented term "steelman", which academia already had and called "reconstructing arguments".)


Effective Altruism is not a reinvention of logical positivism.

Perhaps you mean rationalism (in the internet-rationalist sense; there are others) which is closely associated with EA but not at all the same thing, and is somewhat like logical positivism.


I will believe people claiming these are different things when the EAs stop talking about AI ending the world and other things they read in SF novels and go back to malaria nets.


GiveWell's recommended charities are all doing health interventions in poor countries. The largest category in Good Ventures's grants, by more than 2x, is "global health and development". Malaria nets and the like are, in fact, most of what EAs are doing.

Those things are less controversial and exciting to talk about than (say) whether there's a real danger of AI wrecking the world, so if you focus on what's talked about in the EA Forum or something then you will get a misleading impression.


It's really just utilitarianism, all over again.


Yes, it's definitely utilitarianism. William MacAskill (co-founder of the Centre for Effective Altruism), for example, wrote Doing Good Better: Effective Altruism and a Radical Way to Make a Difference and is also a co-author of https://www.utilitarianism.net/ (an introductory online textbook on utilitarianism).


About fifteen years ago I got involved with the Skepticism movement. It was great to meet so many seemingly rational people and there was excitement that the movement seemed on the verge of growing large enough to make a paradigm shift in how society acts. But after about a year, I started to really sour on it as I could see more and more things becoming rational by decree and thus beyond question. These were supposed to be skeptics, but they were getting more and more into group think, especially on adopting woo-woo views on medical matters.

I had never been a core figure in the local group, so it was easy enough for me to melt away unnoticed, but one of my friends was an organizer and regular lecturer. When she finally decided to leave, out of exhaustion from the group's increasingly abrasive and hostile tone toward any dissent, she was harassed for months. The group was seemingly offended that she had rejected their brilliance and become an apostate.

I think they eventually got subsumed by the Atheism+ movement, which then imploded.


I have a lot of sympathy for skeptics, but this is the first thing that comes to mind when the idea of Skepticism as a movement comes up:

A few years ago, he told me, he went to a skeptics’ conference in La Coruña, Spain. He was walking down some stairs one afternoon, not long after investigating the statue of a local saint, which was said to protect those who embrace it, when his left leg suddenly crumpled beneath him. “It wasn’t like I fell and broke my leg,” he said. “It was more like I broke my leg and fell.” The other skeptics gathered around as he writhed in agony. When he told them, between gasps, that he thought he had broken his leg, they were dubious. “You know, that might just be a sprain,” one of them suggested. Another told him to try wiggling his toes. It wasn’t until Nickell lifted his leg, revealing that it was bent at a grotesque angle to his foot, that they believed him.

https://www.newyorker.com/magazine/2002/12/23/waiting-for-gh...


I was pretty into that level of skepticism but all it really showed me is that people are natural "believers".

We are ultimately storytelling apes. To believe we are going to stop telling each other and ourselves fictional stories is the height of delusion. There is this highly negative aspect too when you start believing the fictional story is actually non-fiction. That opens up all kinds of additional problems that are otherwise solved at the society level by believing in fictions like religion.

I am a total atheist but it seems really stupid to me to not see the role fiction plays in coordinating society. That level of skepticism was overly focused on the individual with near disregard for collections of individuals at the level of society.


There was an article a while back saying that the atheism+ movement vanished because it was integrated into the social justice movement.

What do you think of that? Is atheism+ still around? What happened to its people?


"Rational by decree" reminds me of Ayn Rand's "the Collective", which very quickly turned into a similar kind of cult where people would be berated for liking "objectively bad" art etc.


> EA appeals to exactly that kind of really-smart-person who is perfectly capable of convincing themselves that they're always right about everything. And from there, you can justify all kinds of terrible things.

Yup.

Which is super-ironic given the association with big-R Rationality, Less Wrong, Overcoming Bias, all of which quote Feynman saying "The first principle is that you must not fool yourself, and you are the easiest person to fool."

Now I have the mental image of the scene in The Life of Brian where the crowd mindlessly parrots Brian's call for them to think for themselves.


Your point seems superficially valid, but where do we go from there?

>The worst thing about being smart is how easy it is to talk yourself into believing just about anything. After all, you make really good arguments.

>EA appeals to exactly that kind of really-smart-person who is perfectly capable of convincing themselves that they're always right about everything. And from there, you can justify all kinds of terrible things.

Should we not talk ourselves into believing stuff? Should smart people specifically avoid changing their beliefs out of fear that they will "justify all kinds of terrible things"?

>I'd love to believe in effective altruism. I already know that my money is more effective in the hands of a food bank than giving people food myself. I'd love to think that could scale. It would be great to have smarter, better-informed people vetting things. But I don't have any reason to trust them -- in part because I know too many of the type of people who get involved and aren't trustworthy.

So you don't trust donating money to food banks or malaria nets because "don't have any reason to trust them", then what? Don't donate any money at all? Give up trying to maximize impact and donate to whatever you feel like donating to?


> Should we not talk ourselves into believing stuff? Should smart people specifically avoid changing their beliefs out of fear that they will "justify all kinds of terrible things"?

It's simple really: just be skeptical of your own reasoning because you're aware of your own biases and fallibility. Be a good scientist and be open to being wrong.

> So you don't trust donating money to food banks or malaria nets because "don't have any reason to trust them", then what?

No, they don't trust that you can scale the concept of "food banks are more effective than I am" to any kind of maximization. You can still donate to worthy causes and effective organizations.

> Don't donate any money at all? Give up trying to maximize impact and donate to whatever you feel like donating to?

Yeah, basically. Giving is more helpful than not giving, so even a non-maximalist approach is better than nothing. Perfect is the enemy of good, aim for good.


>It's simple really: just be skeptical of your own reasoning because you're aware of your own biases and fallibility. Be a good scientist and be open to being wrong.

This just seems like generic advice to me, which is theoretically applicable to everyone. Is there any evidence of effective altruists not doing that, or of this being specifically a problem with "really-smart-person"s?

>No, they don't trust that you can scale the concept of "food banks are more effective than I am" to any kind of maximization. You can still donate to worthy causes and effective organizations.

I'm not quite understanding what you're arguing for here. Are you saying that you disagree with effective altruists' assessment that you should be funding malaria nets in africa or whatever (ie. what they want you to do), rather than donating to local food banks (ie. what you want to do)?

>Yeah, basically. Giving is more helpful than not giving, so even a non-maximalist approach is better than nothing. Perfect is the enemy of good, aim for good.

To be clear, you're arguing for donating for whatever your gut tells you, rather than trying to maximize benefit?


> To be clear, you're arguing for donating for whatever your gut tells you, rather than trying to maximize benefit?

Yes. Don't overthink it and end up trying to colonize Mars instead of buying the homeless guy down the street a sandwich.


But where do you draw the line at "overthinking"? I agree that "don't help the homeless guy down the street in favor of funding AI alignment research" is a bit unintuitive, but keep in mind that "don't help homeless guy in favor of helping 100 random guys in africa" is also unintuitive (at least to the extent that we needed a whole movement to popularize it). I'm not saying that AI alignment research is actually the most worthwhile cause to fund, but "convincing people to donate to unintuitive, but theoretically greater utility projects" is basically the reason why effective altruism even exists. If people already naturally donated to the highest impact charities rather than donating to their alumni or the local opera house, we wouldn't need EA because that would be the default.


To quote the movie Stalker,

'My conscience wants vegetarianism to win over the world. And my subconscious is yearning for a piece of juicy meat. But what do I want?'


>> EA appeals to exactly that kind of really-smart-person who is perfectly capable of convincing themselves that they're always right about everything. And from there, you can justify all kinds of terrible things.

> Should we not talk ourselves into believing stuff? Should smart people specifically avoid changing their beliefs out of fear that they will "justify all kinds of terrible things"?

The GP is talking about self-deception. And yes, we should not deceive ourselves.


Okay, but how does this translate into actionable advice? Nobody sets out to intentionally deceive themselves. Telling people that "we should not deceive ourselves" is basically as helpful as "don't be wrong".


One approach is to turn up the demand for intellectual rigor until it hurts. For example, I don't want to become a maths crank, so I am learning about computer proof checkers, Lean, ACL2, Coq, those things. But they are painful to use, it hurts!

To broaden the applicability, consider that a clever person goes through life with their bullshit checker set to medium. Normies try to persuade them of stuff, and they quickly spot the flaws in the normies' arguments. But a clever person doesn't lock the setting at medium. They adjust it to suit their adversary. If it is a negotiation for a large business deal, the skepticism gets turned up to a high level.

One can imagine a situation in which an executive, Mr E, at company A discovers that company B has hired away one of company A's due diligence team. Whoops! Mr E thinks "They know what we look for, and will have made very sure that it looks good." One adjusts the setting of one's bullshit detector not just to the raw ability of one's adversary, but also to whether they have access to your thought processes that they can use to outwit you.

Assume for the sake of argument that the previous paragraph is for real. Adjusting your bullshit detectors to allow for an adversary reading your secrets is an option. Then it leads to actionable advice.

How do you set your bullshit detector when you are trying to avoid deceiving yourself? You use the settings that you use when you fear that an adversary has got inside your head and knows how to craft an argument to exploit your weak spots.


How about this: in swinger communities, they have safe words that they can use to transcend the playtime aspect of reality - how about we develop something similar for internet arguments, a term that is mutually agreed upon in advance, and when uttered by a participant it kicks off a well documented (and agreed upon) protocol where all participants downgrade System 1 thinking to zero and upgrade System 2 thinking to 11...and, all participants carefully monitor each other to ensure that people are executing the agreed upon plan successfully?

This general approach works quite well (with practice) in many other domains, maybe it would also work for arguments/beliefs.


Wouldn't this devolve into name calling almost immediately? On internet arguments it's already implied that you're bringing forth logical points and not just spouting off what you feel in the heat of the moment. Invoking the safe word is basically a thinly veiled attempt at calling the other party irrational and emotional.


> Wouldn't this devolve into name calling almost immediately?

If it did, 1-week ban. Do it again: 2-week ban. All according to terms of service that are well explained and agreed to on a point-by-point basis... except in this case, the TOS are actually serious, not "serious" in the colloquial sense that people have grown accustomed to.

This would create substantial gnashing of teeth, but I anticipate before too long a few people would be able to figure it out (perhaps by RTFM), demonstrate how to speak and think properly, and a new norm would be established.

Besides: if 95% of people simply cannot cut it, I don't see this as a major problem. Cream is valuable, milk is more trouble than it's worth.

Two things that should never be underestimated:

- the stupidity of humans

- the ability of humans to learn

> On internet arguments it's already implied that you're bringing forth logical points and not just spouting off what you feel in the heat of the moment.

It's even worse: it is perceived as such! This is the problem though: people have never been taught how to reliably distinguish between opinions, "facts", facts, and the unknown (the latter is typically what catches genuinely smart people). So: offer an educational component, maybe integrated into the onboarding process.

Too big of a hassle? Best of luck to you elsewhere (provide links to Reddit, Facebook, Hacker News, etc).

> Invoking the safe word is basically a thinly veiled attempt at calling the other party irrational and emotional.

Take a wild guess what response a comment of this epistemic quality (in the form that it is currently presented) would elicit under the standards I describe above.

Besides: I doubt any unemotional, rational people exist on the planet. It is not a question of "if" someone has these shortcomings, it is a question of "to what degree" they suffer from them. And should we expect any different from people? We don't try to create any of these people, and it's not like they have anyone to emulate.


> how about we develop something similar for internet arguments, a term that is mutually agreed upon in advance, and when uttered by a participant it kicks off a well documented (and agreed upon) protocol where all participants downgrade System 1 thinking to zero and upgrade System 2 thinking to 11

This used to be Godwin's law. Except by the time that's triggered, your System 2 thinking, dialed to 11, almost always tells everyone it's time to leave that discussion.


> This used to be Godwin's law.

Hmmm, let's see:

"Godwin's law, short for Godwin's law (or rule) of Nazi analogies,[1] is an Internet adage asserting that as an online discussion grows longer (regardless of topic or scope), the probability of a comparison to Nazis or Adolf Hitler approaches 1.[2]"

"a term that is mutually agreed upon in advance, and when uttered by a participant it kicks off a well documented (and agreed upon) protocol where all participants downgrade System 1 thinking to zero and upgrade System 2 thinking to 11"

I see very little similarity between these two things.

> Except by the time that's triggered, your System 2 thinking dialed you to 11 almost always tells everyone it's time to leave that discussion.

From my other comment: "if 95% of people simply cannot cut it, I don't see this as a major problem. Cream is valuable, milk is more trouble than it's worth."

Lots of people will leave, but there will be some who remain. It's a similar principle to quality standards when joining various organizations, or any other process that involves targeted selection.


> I see very little similarity between these two things.

I'm not sure what isn't clear: the trigger condition you're looking for is the first comparison to Nazis is made. Except as I said, by the time that point is reached I'm not sure productive discussion is possible.


> I'm not sure what isn't clear: the trigger condition you're looking for is the first comparison to Nazis is made.

I'm baffled, what are you referring to here? Why am I looking for a trigger of a comparison to Nazis?


You literally suggested:

> a term that is mutually agreed upon in advance, and when uttered by a participant it kicks off a well documented (and agreed upon) protocol ...

That term is a trigger condition for initiating a protocol. I suggested that the first comparison to Nazis is the trigger condition. What's unclear here?


I think maybe the problem is that you seem to be classifying a highly specific concrete instance of a very broad abstract class as being equal to the abstract class itself (and thus: equal to all possible subordinate concrete classes).

>> how about we develop something similar for internet arguments, a term that is mutually agreed upon in advance, and when uttered by a participant it kicks off a well documented (and agreed upon) protocol where all participants downgrade System 1 thinking to zero and upgrade System 2 thinking to 11

> This used to be Godwin's law.

In this case, playing the Godwin's Law card would invoke my recommended process, as opposed to playing the Godwin's Law card being an instance of my recommended process.

> Except by the time that's triggered, your System 2 thinking dialed you to 11 almost always tells everyone it's time to leave that discussion.

Again, this claim would invoke my process, and would be accompanied by a reminder that it isn't actually possible to read minds or the future, it only seems like it is possible when running on System 1 heuristics.

But then, both my logic and intuition suggest to me that what's going on here is that you and I are talking past each other, and if we were to eliminate all of the numerous flaws in communication (for example: your usage of "the" trigger condition instead of "a" trigger condition[1]), we'd discover we don't actually disagree very much. But ain't no one got time for that, under current protocols.

[1] A common response to this style of complaint ("pedantry") is that one should simply assume [the correct intended] meaning - but again, this has a dependency on mind reading, which is a false premise (that seems true during realtime cognition).


How about "don't go against conventional wisdom unless you have a good reason; the more conventional the wisdom, the better the reason needs to be"? Possibly combined with "be humble and give subject matter experts some credit, an hour on Google Scholar doesn't mean you've learned everything".

If the conventional wisdom is "don't order research chemicals from a lab in China then self-inject them", then maybe a plan to get a peptide lab to manufacture cheap Semaglutide is dangerous, even if you can't explain exactly why it's dangerous (in this case it's probably pretty obvious).

If, on the other hand, the conventional wisdom is "eat 6 - 11 servings of grain and 3 - 5 servings of vegetables a day", but many nutritionists are recommending less grain and there's new research out saying that much higher vegetable intake is good, maybe a plan to eat more vegetables and less bread is good.


> How about "don't go against conventional wisdom unless you have a good reason; the more conventional the wisdom, the better the reason needs to be"? Possibly combined with "be humble and give subject matter experts some credit,

I have a feeling that this is basically the generic talking point to use when your opponent is more radical than you. The opposite would be you accusing your opponents of being luddites or whatever because they're too bought into "conventional wisdom". Neither is actually helpful epistemically, because the line for "good reason" is entirely arbitrary and is easily colored by your beliefs.

>an hour on Google Scholar doesn't mean you've learned everything".

>If the conventional wisdom is "don't order research chemicals from a lab in China then self-inject them", then maybe a plan to get a peptide lab to manufacture cheap Semaglutide is dangerous, even if you can't explain exactly why it's dangerous (in this case it's probably pretty obvious).

I think you're painting effective altruism with too broad a brush and giving them too little credit. I'm very skeptical that the typical effective altruist is ordering semaglutide from china or that the typical EA analysis on x-risk is based on "an hour on Google Scholar".

>If, on the other hand, the conventional wisdom is "eat 6 - 11 servings of grain and 3 - 5 servings of vegetables a day", but many nutritionists are recommending less grain and there's new research out saying that much higher vegetable intake is good, maybe a plan to eat more vegetables and less bread is good.

Hold on, all it takes to turn over "conventional wisdom" on nutrition is "many nutritionists" and "new research"? Do some well-researched books like "The Precipice" or "What We Owe the Future" suffice here? I'm sure that among all the effective altruists out there, you can find "many" to support their claim?


> I have a feeling that this is basically the generic talking point to use when your opponent is more radical than you.

EA people would probably phrase it as something about how updating strong priors in response to weak evidence needs to happen slowly, but I feel the Bayesian formulation is a bit toothless when it comes to practical applications.

The broader point is that when your opponent is more radical than you on a factual issue[0], but they don't present any evidence for why, they're probably wrong. This isn't good enough in a debate but it's a fine heuristic for deciding whether to use opioids as performance enhancers.

> I think you're painting effective altruism with too broad a brush and giving them too little credit. I'm very skeptical that the typical effective altruist is ordering semaglutide from china

This is a fair criticism, but I didn't mean to apply it to the movement as a whole, only to the particular failure mode where some effective altruists (or more generally, rationalists) talk themselves into doing bizarre and harmful things that equivalently smart non-EAs would not. It's easy to talk about Chesterton's Fence but it's not so easy to remember it when you read about something cool on Wikipedia or HN.

> Hold on, all it takes to turn over "conventional wisdom" on nutrition is "many nutritionists" and "new research"?

I'm just looking for a heuristic that stops you doing weird rationalist stuff, not a population-wide set of dietary recommendations. It's okay if some low-risk experimentation slips through, even if it's not statistically rigorous and even if it's very slightly harmful.

The point is that there are two requirements being met: first, no strong expert consensus ("many nutritionists" was too weak a phrasing and I apologise), and second, if you ask a few random strangers (representing conventional wisdom) whether eating more vegetables and less bread is good for you they'll tell you to go for it if you want to, while if you ask about using non-prescription opioids they'll be against it.

[0] Values-driven stuff is different.


It's called science.


That's the same problem as before. Outside of maybe fundamentalist religious people who think their religious text is the final word on everything, everybody agrees that "science" is the best way of finding out the truth. The trouble is that they disagree on what counts as science (ie. which scientists/institutions/studies to trust). When the disagreement is at that level, casually invoking "science" misses the point entirely.


Even then, some fundamentalists have "science." Ken Ham being an example here.


Science doesn't care who does the science. It just takes time.


That might be true, but it's a non-sequitur because this thread is talking about the epistemic practices of a particular group. Whether "science" (the institution, method, or humanity in general) will eventually arrive at the truth is irrelevant.


Is it irrelevant? Did this group arrive at a scientifically validated set of conclusions? If not, move on.


A big problem is we have gone from Feynman saying "science is the disbelief in experts" to "What experts say is true right now is the science".


Not really. Saying that relying on experts isn't needed is a common, self-deprecating thing scientists like to say, but it doesn't really work. Even Feynman wrote about having to deal with cranks who sent him letters in which the authors thought they had disproved relativity or something. Everybody's opinion isn't equal in science.


I've never met anyone who didn't deceive themselves in significant ways.


>The worst thing about being smart is how easy it is to talk yourself into believing just about anything. After all, you make really good arguments.

>EA appeals to exactly that kind of really-smart-person who is perfectly capable of convincing themselves that they're always right about everything.

I think that just as big of an issue is that a lot of these EA people are not really smart. They are generally financially successful, which many people confuse with being really smart. This is especially true if that financial success came as a result of being really good at something (like programming), which doesn't necessarily translate into being really good at other things, or being really smart in general. This effect is compounded when other people are constantly stoking their egos due to the aforementioned professional and financial success. This is not to say that wealthy people who go into EA are stupid (they are very likely intelligent), but they just may not be as smart as they believe they are, or as smart as other people tell them they may be.


I am out of the loop a bit, so I'll say that first. An acquaintance introduced me to the phrase effective altruism and talked about sending money to a place that's having problems rather than going there yourself, and I just thought, yah, that's a more effective way of being altruistic and went on with my day.

I'm just learning that they made it into a cult. Who thought that was a good idea? Just be an effective altruist FFS.


> I'm just learning that they made it into a cult.

EA has a lot of overlap with religion by its very nature. Any moralizing philosophy with dedicated followers will tend to become cultish.


I mean, I agree, but it seems to me that the abstract concept of being altruistic effectively is not moralizing at all - it merely carries the assumption that the actor has taken a moral stance and leaves it at that.

I saw it more like "If you are going to attempt to make your actions line up with your morals or ethics as a matter of principle, you should do it in a way that actually produces the result your ethics demand."

I mean I'm not really arguing, I know people are gonna do what people do...


Mountains Beyond Mountains is a great book with an alternative view. Dr. Paul Farmer is likely to do a good job with your money helping people because he takes local patients, root-causes the problems making them sick, and solves those.

Somebody from on high isn't going to have the right perspective to do a good job. I think the effective altruists are solving a problem for themselves ("how do I feel better about having all this money?") and not "how do I solve this person's inability to fill their prescriptions?"


> The worst thing about being smart is how easy it is to talk yourself into believing just about anything.

What about if you're less smart? Wouldn't you be more easily fooled?

Maybe it's not about smartness, but about arrogance?


> But I don't have any reason to trust them

My favorite aspect of EA is the transparency. GiveWell and Open Phil publish white papers that go into excruciating detail about how they reached their conclusions. You don't have to trust them. Read what they publish and draw your own conclusions about how to do good most effectively.

I've also enjoyed the writing of Julia Galef, especially The Scout Mindset, which many in the EA community have embraced as solutions to not only the "too smart for your own good" problem but general divisiveness/tribalism in society.


I’ve looked into EA / GiveWell charity recommendations, but when I’ve independently looked into their IRS Form 990 returns, the charities they recommend often seem tiny, with spending of tens of thousands to hundreds of thousands of dollars per year.

It’s always seemed to me there is likely to be more oversight in larger organizations. I’ve been meaning to ask about this but never had time.

Also if you read their recommendations, a lot of times their source is “personal conversations with the leader of the charity”


I wouldn't say "really-smart-person" but rather a reasonably smart person who believes they are way more intelligent than they actually are. People who mistake competence in one area for expertise in all areas.


aren't these all, at their root, postmodern re-implementations of religion?

with all the related problems & benefits.


Yes, this is perspicacious. If you reject the premises and practices of religion, you will eventually re-invent them all.

Disestablishmentarianism by modern democracies means that without a centralized religion to arbitrate Truth, pseudo-religions will expand their power bases to fill in the blanks.

Psychiatry, Social Security, shelters, soup kitchens, hospital and university systems. All were adequately handled by Christendom, until society decided that Christendom was superfluous and hindering their goals.


I make this comment as I observe many of my friends attach meaning to politics that people used to attach to religion, and I find it both ironic & troubling as it comes with all the problems of intolerance & lack of compromise, us vs them, etc.


What a whole bunch of bullshit.


"God is a dream of good government."


I am an EA (in the sense that I believe in the importance of some general principles of the movement, like trying to do the most good, and use data when available -- I don't participate in any organization directly).

What I think is the most promising feature of EA is openness to criticism and trying to improve the movement itself. I think SBF was a wake up call and many people in the movement should take note that this isn't how we get to a better future or do the most good (although I think his tactics also were not really EA canon, if there is one, and I don't know if people knew there could be outright fraud -- disclaimer, I don't know the details of SBF's case). That case highlighted the importance of following common sense ethics, honesty and so on.

I also think it's a very common issue for smart people to 'get over their heads' (is that a valid expression?), which I wrote about here: https://www.reddit.com/r/slatestarcodex/comments/yww9g6/be_s...

It's certainly dangerous to use reasoning to justify whatever you would like to be true. I believe this is addressed in Julia's book 'The Scout Mindset' (haven't read it, only a few reviews!), and it's actually one of the main points of the book (with several examples) -- a.k.a. motivated reasoning, arrogance, etc. We can't be perfect, but we can learn from our mistakes and be aware of those dangers.

I hope the EA community will have a sincere look at those issues and address whatever needs to be addressed. I myself will continue to be an EA possibly forever, because the root ideas are very sound. I want to help people, and I want to learn how to best help others. You can't make me not want to help others effectively ;)


What’s wrong with fraud if it has positive expected utility? At least if you’re an avowed act utilitarian like SBF.


Even with a cold calculation of utility, fraud has the same effect as defection in the Prisoner's Dilemma, and reality is an iterated Prisoner's Dilemma (IPD), not a single round; the winning strategy for that is tit-for-tat.
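A minimal sketch of that iterated-game intuition (illustrative only; the payoff numbers are the standard textbook values, nothing specific to EA or SBF):

    # Iterated Prisoner's Dilemma: tit-for-tat vs. always-defect.
    PAYOFF = {  # (my move, their move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(opponent_history):
        # Cooperate first, then copy the opponent's previous move.
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def play(p1, p2, rounds=100):
        s1 = s2 = 0
        h1, h2 = [], []  # each player's record of the opponent's moves
        for _ in range(rounds):
            m1, m2 = p1(h1), p2(h2)
            s1 += PAYOFF[(m1, m2)]
            s2 += PAYOFF[(m2, m1)]
            h1.append(m2)
            h2.append(m1)
        return s1, s2

    print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation pays
    print(play(tit_for_tat, always_defect))  # (99, 104): one-shot gain, then punished every round

The single defection buys a small head start and then costs cooperation for the rest of the game, which is the point about fraud's second-order effects.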


Second order effects; good luck raising money once your fraud is exposed.


Second and third order effects that result in living in a society that embraces fraud. If you're able to completely cover it up it's a different story, but that's a very dangerous bet to make.


Effective altruism is, I think, an ideology perfectly suited to ensnare a certain kind of person.

Conventional wisdom would say that wielding wealth and power like effective altruism demands requires humility, compassion, and maturity. It requires wisdom. Effective altruism can seem to remove the need for that. Doing good is about calculation, not compassion! Interpersonal failings don't matter if someone is really good with C++. One needn't care about the feelings of others if there are more efficient ways to use the time.

Effective altruism calls on the rich and capable to recognize their own power to help those who are poor and helpless. However, it is easy for pity to turn to contempt and for clarity of purpose to turn to arrogance. The poor, hungry, and sick of the world need the effective altruist for a savior. The effective altruist is better than the rest because they are making the world a better place.

An effective altruist may confuse immaturity with wisdom and greed with generosity.

This is not meant to be a diatribe. I find much of effective altruism obviously true and find my exposure to it has made me a better person. If pressed, I would probably call myself an effective altruist. Still, it is greatly concerning that people like Elon Musk or Sam Bankman-Fried can be associated with effective altruism without any real hypocrisy.


Yup.

Voltaire put it most succinctly:

"Those Who Can Make You Believe Absurdities, Can Make You Commit Atrocities"

And, as Feynman pointed out:

“The first principle is that you must not fool yourself and you are the easiest person to fool.”

So, yes, this works just as well recursively, i.e., if you are making yourself believe absurdities...


Let us take a moment and separate the idea from the community - most (I might potentially say all) communities fail to embody their own ideas. A fun one is giving any religious community 100 years and then checking in to see how they are going at avoiding scandals.

Effective altruism the ideal is very easy to defend - it is blindingly obvious that economic advancement has outperformed all attempts at charity by orders of magnitude. Asian countries keep using work-hard-and-save with jaw dropping outcomes. The success rate of capitalist entrepreneurs at driving social improvements dwarfs the attempts of charities too.

The Effective Altruist community however is probably going to devolve into something rather unimpressive. There isn't much difference between a high performing effective altruist and a successful capitalist, so the community is on shaky ground. They don't have obvious rituals or demographic pressures to keep the community congealed.

Although this headline confuses me. Don't all communities with men in them have a sexual harassment problem? I'm not recalling a community that I'm eligible to join that hasn't been labelled as having sexual harassment problems.


I guess this has to do with people trying for (in any sense) "higher standards" being perceived as hypocrites when they fail to meet them. Whenever you have a movement that claims to be "trying to make things better", you will be attacked for having any problems even if everyone else also has those problems.

In some sense, I think this idea is even valid. "You're trying to be better so why are you not better at this particular thing?" seems like a reasonable question. It's just sad when this is used as a general argument against trying to improve things.


The worst thing about "smart" is that it doesn't mean a damn thing because it conflates cleverness and wisdom, which are orthogonal.


> The worst thing about being smart is how easy it is to talk yourself into believing just about anything. After all, you make really good arguments.

How smart you are has nothing to do with it. If you're motivated enough to believe something that you shouldn't believe, you'll talk yourself into it. Smart people might be able to come up with an objectively better (or at least more convoluted) rationalization, but only because they'll require of themselves a better rationalization. Less-smart people won't be as capable of rationalizing, but they also won't be as discerning about the quality of the rationalization. Kind of like how people develop about the right amount of muscle to carry their own weight around, whether that be 90 or 180 lbs.

> EA appeals to exactly that kind of really-smart-person who is perfectly capable of convincing themselves that they're always right about everything.

This sounds negative until you realize this also applies to programming and litigation, chess and go, everything in academia, communism and anarcho-capitalism, etc. The fact that something appeals to know-it-alls says next to nothing about the quality of the thing. Chess isn't a worse game because it attracts insufferable brainiacs and other pretentious people; it's just the community around it that suffers. But the communities around everything interesting suffer from those same sorts of people.

> And from there, you can justify all kinds of terrible things.

> Once that happens, it can easily spiral out from there. People who know perfectly well they're misbehaving will claim that they aren't, using the same arguments. It won't hold water, but now we're swamped, and the entire thing crumbles.

Once again, yes, people who want to justify terrible things can do so. It's not unique to tech bros. Fundamentalist christians, the homeless, and young activists of any alignment can do it too.

> ...in part because I know too many of the type of people who get involved and aren't trustworthy.

This is the meat; the only question that actually matters: Is it true that EA is disproportionately populated by the "untrustworthy"? If so, then there's a reason to stay away from it. This article's pair of anecdotes is worse than useless for arriving at the answer. All I know after reading it is that TIME is interested in taking EA down a peg. The evidence presented is utterly insufficient to support the claim that "EA has a sexual harassment problem [to an unusual--and therefore newsworthy--extent]".


It isn't about being smart. Anyone can convince themselves of anything, eg flat earthers.

>I already know that my money is more effective in the hands of a food bank than giving people food myself

In certain situations. Unless you're generally in favour of communism. Most people just skip the barter entirely and give money, generally in exchange for things. It isn't clear to me why charity in general should be different.


This observation is correct, and it applies to any kind of consequentialist/utilitarian reasoning in general.


[flagged]


It's OK as an analogy but I don't think actual smart people are more likely to trick themselves.


The point of a scam is that you fall for it due to a moral failing, not a lack of aptitude/stupidity. In a scam people use their own intellect to justify chasing after some appetite, so smarter people just convince themselves harder.

EA is a scam that uses "the ends justify the means" as the temptation, and that's why people are into it.


If there are fewer cases of abuse than among the general populace, it is weird to blame the movement. From the article, it is hard to conclude that EA is somehow worse than most of the USA.

Also, arguments for polyamory are just that, arguments. You certainly can press someone into it, but from the article, the impression is that it is more like persuasion.

Regarding cult dynamics - any tight knit community feels like this. Be it psychedelics users, health nuts, athletes etc etc. All of these will have takes that outsiders will consider unusual and weird.


> but from the article, the impression is that it is more like persuasion.

Okay -- but if you showed up to a tech conference, and someone in the hallway was trying to "persuade" you to join a threesome, would you feel that was appropriate for the setting? The issue is that so much of this relationship-and-sex talk is happening to people who didn't think they had signed up for it. That's where you start verging on abuse.

I think part of the issue is that EA is:

- kind of life-defining by nature

- filled with people who seek self-improvement

- filled with people who are excellent persuaders

With that mix, it is uncomfortably easy to start "mixing business with pleasure," so to speak. People in that environment think that they live a really interesting and good life, and want to convince others of that fact.

That's why people are blaming the movement, I think.


Having been part of the rat/EA community in the bay area for over a decade I can 100% confirm that the incidence of weird sex stuff happening is far far above of the baseline I've experienced with other groups of people, and i tend to keep odd company.

Not some sort of scientific study but also the only set of people where I've been casually asked if I wanted a prostate massage at a house party. The norms around here just hit different.

I'm not mad about it, and I think I've personally never felt harassed and in general there seems to be a very explicit consent culture, but also weird sex stuff is so normalized that it must be easier for bad actors, especially clever ones to get away with abusive behavior.


Off topic: Your photogrammetry project is really cool!


> arguments for polyamory are just that, arguments

Which have nothing whatever to do with effective altruism. So it's perfectly reasonable for a person who came to a gathering expecting to talk about effective altruism, to be uncomfortable, to say the least, when she finds herself getting proselytized about polyamory.


It’s never who you want to be polyamorous who’s polyamorous.


Correction: The people you want to be polyamorous don’t need to go around recruiting partners.


I agree in principle, but isn't it like this often for any activity?

People talk, sometimes about sensitive or offensive topics. RN, I encounter people around me who have very strong opinions about the war in Ukraine. Sometimes it comes to direct insults, and while this is unfortunate, I won't say some group is responsible. The specific person is responsible for their actions.


Arguments for polyamory are also regarded as grossly unprofessional in any environment that's focused on a specific goal. Most people don't want to be goaded by strangers into "arguing" about their relationship status. It's abusive.


Yeah, but the woman interviewed wasn’t saying men in San Francisco kept asking her to be in their polycule; she said men in EA groups kept asking her to be in their polycule.

I mean, nobody has ever asked me, or anyone else I know, to be in their polycule, so multiple times per year from people in these EA discussion groups is statistically significant and pretty creepy.


> You certainly can press someone into it

You’re not supposed to persuade people into any arrangement that lets you access their bodies unless they explicitly indicated that they are open to being persuaded, and especially not when they are being flown over for what’s supposed to be a job interview, or any form of meeting up with the promise of an academic or a professional collaboration.

> but from the article, the impression that it is more like persuasion

Sorry but you don’t know that, and I’m sure I’m not the only one who does not get that impression either.

There are enough comments in this entire thread about how people believe things only to justify the terrible things that they do.


> If there are less cases of abuse than among generic populace

Where are you getting this?


They might be saying that this is a series of anecdotes, not a scientific study, so we don’t know.


Does anyone think there are teams of scientists going around documenting the number of sexual abuse cases in every community? Pretty much what we get are anecdotes. And the correct comparison here is not “society at large” but rather, comparable professional organizations or interest groups.


Of course these dipshits needed to be polyamorous: their little community was like 70% dudes, and there clearly wasn't enough pussy to go around. Of course, this would cause more problems than it solved; having been in a poly relationship before, I can tell you firsthand that the number of issues in the relationship scales like n! for n people involved.


> EA is diffuse and deliberately amorphous; anybody who wants to can call themselves an EA... But with no official leadership structure, no roster of who is and isn’t in the movement, and no formal process for dealing with complaints, Wise argues, it’s hard to gauge how common such issues are within EA compared to broader society.

This passage reminded me of this article: https://www.jofreeman.com/joreen/tyranny.htm

Moral of the story: be wary of groups with low accountability and vague power structures. In a vacuum, power structures will always emerge, so it's generally better for them to exist in the light than in the dark.


I think it's bizarre EA seems to be a movement with power structures. I always just thought EA was a philosophy and based on that I felt it was an interesting idea. I don't have to worry about sexual harassment when I'm considering Plato or Stoicism. Why is it a thing with EA?


Many industry events have had problems with sexual harassment. Young people living together in group houses (for example, fraternities) often have problems too.

So the problem in this case seems to be for young people who want to connect at in-person events. If you never go to events, don't want an EA job, and don't want to live with other EA people, then I don't think you can be affected?

So I'm not sure it's a problem with the movement as a philosophy so much as with holding lots of loosely moderated events, having parties, having group houses, and so on. That is, this pretty intense socializing seems high risk for this sort of thing.

What other institution does this remind me of? College. College sex scandals tend not to make the philosophy department look bad unless a teacher is involved, but that certainly happens.


> holding lots of loosely moderated events, having parties, having group houses, and so on.

Ah. I wonder if the nub of this is group houses?

I can't tell whether the abuse phenomenon is characteristic of EA, or of San Francisco. I understand housing in SF is absurdly expensive, even for well-paid people, so perhaps group houses are more prevalent in SF. Is abuse more common in EA-affiliated group houses in other parts of the country?

My supposition is that abuse is more likely in a group house, whether it's connected with EA or not; but I have no facts to back that up.


Something can both describe a philosophy and a movement. The movement always has hierarchies and power structures. The philosophy doesn't, but then again, it's often presented by the movement so the lines get blurry.


This is insightful, thanks.


When there is a lot of money moving around, it seems inevitable that power structures will form around it.


That might be true, but there isn't exactly an "Effective Altruism Foundation" that all the donations funnel through. You basically have a bunch of random people who are donating to charities that are well regarded, with a few philanthropists here and there setting up foundations. You might be able to hijack organizations like GiveWell (i.e., organizations that tell people which charities they should donate to), but trying to monetize that is tricky. At the end of the day you don't really control anything, because you're still reliant on individuals following your advice. So if you try to funnel money to your own foundation (for embezzling purposes), you will easily burn any goodwill that you've built up.


The Centre for Effective Altruism has a turnover in the tens of millions. Surely there are people with greater and lesser decision making abilities within that institution?


>The Centre for Effective Altruism has a turnover in the tens of millions

Google says that their annual budget is $6M, so I think you're overestimating how much money can be siphoned off. Moreover, given how trendy effective altruism is with young professionals from elite universities, getting a high position will be difficult. Sure, an Ivy League graduate could fight tooth and nail to get a position at the CEA that pays a meager base salary and hope to siphon off some extra money, but he can make much more money at a professional services firm or in finance. Better yet, he can get some high-ranking position at a private company that has 10x the turnover and doesn't face public scrutiny (because it's not a charity). That's not to say that everyone at CEA is behaving scrupulously. It's just that getting into effective altruism to embezzle money makes little economic sense.


I think the sums are higher when you consider additional one-off donations. CEA recently bought an estate at Wytham Abbey, which would almost certainly have cost more than their yearly budget. Apparently this was funded by a one-time donation by another EA organization. Similarly, the EA-adjacent ESPR bought a $5m chateau in the Czech Republic. These are major chunks of money to throw around, especially when you factor in maintenance costs, and I have no doubt that the ability to control access to these amazing facilities gives you a lot of influence in the community.


The castle was apparently bought by Facebook co-founder Dustin Moskovitz. I certainly think the CEA is misguided, but their main purpose is basically holding conventions/recruitment events, and if they think they'll be spending considerable sums on convention space related to that mission, I can see how it makes sense to just buy a property and hold all the events there.


My understanding is that the UK castle was funded by a directed donation from another EA organization that had close ties to CEA. As far as I know, Moscovitz did not direct that organization to buy a castle. Instead a group of tightly-connected colleagues/friends in two EA organizations made the determination, without any (public) cost/benefit analysis or buy-in from the community. This is precisely the sort of thing you'd expect to happen in a community with large amounts of money and very little accountability, which is the allegation made higher in this thread.

ETA: If I'm wrong and you can point me to a detailed cost/benefit analysis, I'll gladly withdraw my criticism.


Dustin posts a lot on the EA meme Facebook group (no clue how he finds the time while being the CEO of Asana) and admitted to being the buyer there. Could've been memeing, but he seemed serious.

I think CEA has made some poor choices, and SBF's Future Fund even more so. But if it's true that the money used for this property was specifically earmarked for the purchase of a property by a single donor, as the forum post claims, I can see how it makes sense. https://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53bG/...


I searched the link you provided for anything that claimed Dustin Moskovitz requested that purchase, but I didn't find it. I did however find this detailed explanation of the grant-making process [1] by Claire Zabel of Open Philanthropy, and it seems pretty clear that the request to purchase Wytham Abbey came from Owen (Cotton-Barratt, I think?) of CEA (now known as Effective Ventures.)

Claire provides her justification for granting the funds to purchase the property, and it is not particularly compelling. (She even concludes that she would not have made the grant if she had a chance to do it over today.) There certainly doesn't appear to have been anything approaching due diligence about the cost-effectiveness of this particular purchase, which is really surprising given that effective use of resources is the core premise of EA.

To make things worse, the commenters point out that Claire Zabel is also on the board of Effective Ventures (formerly CEA), which makes this a much worse conflict of interest. It's hard to look at these organizations as anything more than a tightly-knit group of friends passing donor money around between them.

PS: If CEA and OP are just fronts for Dustin Moskovitz, it's totally fine for them to spend money on whatever they want, as long as he's cool with it. I had the impression these organizations were part of a broader community promoting the principles of Effective Altruism as a movement, and that the community would hold them to those principles. It is extremely difficult to look at the details of this episode and believe that's happening.

[1] https://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53bG/...


Open Phil is literally just Dustin's foundation. He's also one of the main funders of GiveWell's research work, and I am not a fan of how incestuous a lot of the orgs are.

The point of the link was the explanation given by the guy at CEA who ran the project. I agree it's not good enough, and I have come to the conclusion that this is just something Dustin thought would be cool, which is not very EA, but I guess it's his prerogative.


The contention made (far!) up-thread was "When there is a lot of money moving around, it seems inevitable that power structures will form around it."

Then somebody else said that CEA's budget was only $6m, so how bad could things get? The Wytham Abbey example was brought up just to show how much more money CEA could tap into, through its connections with other EA orgs.

But from my perspective, the EA "castle adventure" is also an excellent illustration of those power structures. Here we see a small number of people (friends, colleagues, fellow board members) take control of core EA institutions (using enormous flows of donor cash) with very little pushback from the community. And worse, they are using this money for purposes that are completely at odds with the stated principles of the EA movement.

As an outsider, if a few people with access to cash are so easily able to capture the most prominent orgs in the EA movement and make them ineffective, then that's pretty terrible for EA as a brand. This doesn't mean I'm opposed to the broader concept of "giving money effectively", but I'm definitely going to feel an aversion to anything that carries the EA label.


Perhaps I'm misled, but Wikipedia says the annual budget for 2021 was $28 million:

https://en.m.wikipedia.org/wiki/Centre_for_Effective_Altruis...

I see the conversation has gotten ahead of me, but the Wytham Abbey purchase is absolutely an example of my concerns. The justification wasn't much more than "I like going to conferences in big posh buildings and some other people I've talked to do as well".


Makes me think of Parkinson's law: "work expands to fill the time allotted for its completion" -- but this time, "money spent expands to match the amount of donations".

I wonder how to design an organization where people don't do that. It seems hard.

Maybe new metrics: Money-not-spent, and Time-we-didn't-need-to-use. But can be easily gamed, hmm


The EA "philosophy" is strongly tied up in libertarian utilitarian ways of thinking, and such people are able to talk themselves into believing that it's rational to defer to people smarter/richer than themselves. They get money, intelligence and virtue all mixed up and confused with each other. Being intelligent gets you money, money buys virtue, those who are smartest will become richest and those who are richest will be able to buy the most virtue.

Power structures emerge naturally from this.


This is an aside, but although I agree that groups without formal power structures can hide real ones, I'm not sure explicit hierarchies are necessarily better. In my experience, they can be used to legitimize shadow hierarchies or corruption, which sometimes makes the problems worse. Those vague power structures exist with or without formal ones; when they coincide it's good, but when they don't, it can perpetuate or reinforce problems more than they might otherwise.

I'm not trying to defend anything about EA, though. It's always seemed somewhat suspicious to me, and there's probably a lot of ways in which it could be used as an example of phenomena that occur more broadly in society.


> In my experience, they can be used to legitimize shadow hierarchies or corruption, which sometimes makes the problems worse.

At least in your day-to-day formal hierarchies, those who are negatively affected by the shadow hierarchy don't have anything to lose by acknowledging that it is indeed a power structure. If Alice is the boss but Bob is the one really running all the things, none of Alice's employees are going to lose any sleep by acknowledging the truth of the situation.

But in communities that claim to be non-hierarchical, coming to terms with the existence of a shadow hierarchy could constitute an existential crisis. This isn't a logical necessity-- e.g., members could simply notice and just shrug it off. But most groups I've come into contact with that claim to be non-hierarchical assign great positive value to it, and they get defensive or squirrely about any attempts to uncover hidden power structures within.


> I'm not sure explicit hierarchies are necessarily better.

It's the explicitness that is the good part, not the hierarchy. The premise is that there will always be hierarchy; groups that profess to be non-hierarchical have a hidden hierarchy that is more pernicious.

So it's not like "Oh, this group has no hierarchy, so lets invent one and write it down", it's more "This group appears to have no hierarchy; so we need to do some digging, to expose the hierarchy".

If you join a non-hierarchical group, it can take years to discover that it really does have a hierarchy, and more years to learn how it works. Hidden power is more dangerous than overt power.


I see that essay linked every six months or so, and I swear every time I read it, a new element of it rings true to me. Really timeless, invaluable writing on the way groups of humans work.


Back in my day effective altruism was mainly about finding charities that aren't essentially scams (way harder than it looks). Scene has apparently moved on to other things since I followed it a decade or so ago.


This is a super-simplified summary, but I think it's generally accurate.

The basic thesis of EA is "it is your duty to improve/save as many human lives as you can."

At some point, a lot of EAs realized that there were many more future humans than present humans.

Once you widen your scope in that way, you start realizing that long-term, catastrophic risks (climate change, nuclear disaster, misaligned AI) would affect a lot more human lives -- billions or trillions more -- than basically anything we could do today.

So the logic becomes -- why would I spend time/money on mosquito nets when we need to be securing the literal future of the human race?


The expansion of EA from eliminating malaria to interplanetary exploration was a pure grift. Once EA organizations started to pull in serious money and began purchasing palatial estates in Oxford[1], you had to know the jig was up.

[1] https://en.wikipedia.org/wiki/Wytham_Abbey#/media/File:Wytha...


Because we do that with mosquito nets.

EA seems like a way to achieve nothing while looking like you're doing everything. No one expects you to fly to Mars tomorrow. And that's true every single day. It's true today. It'll be true tomorrow. It was true yesterday. It was true 10 years ago. It will be true 10 years from now.

So if no one really expects you to fully achieve your goal, all you have to do is kinda look like you're trying and that will be good enough for most people.

EA takes a good, hard look at all these good intentions and says, "Fuck, this would make a baller ass road".

However, if we solve malaria, that's another thing not killing us. Another problem checked off, like polio or smallpox. Colonize Mars? Fucking how? We can't even get the environment on Earth under control. How the living fuck are we going to create an environment on another fucking planet, much less even get there?

So how about we figure out a way to get the garbage out of the ocean, or how to scrub the air of CO2, or how to manufacture and produce without so many polluting side effects. We keep doing all these smaller things. Put in the work, and one day we will save all those trillions of potential lives. But it requires putting in the work.

Edit: Not saying you believe it. But presenting the counter-argument to EA.


Where does this view of EA come from? Hatchet jobs written by TIME and its ilk? Twitter personalities like Elon Musk, who have so much social gravity that people perceive them as the spokesmen of anything they mention that you hadn't heard of before?

Google "effective altruism" and the first two results are EA/Giving What We Can and GiveWell. Both of these organizations are meta-charities that help forward money or encourage the forwarding of money to other charities, but most of all... Mosquito nets! The first charitable fund mentioned by EA/Giving What We Can is GiveWell's, and the top recipient of that fund is the Malaria Consortium.

I heartily encourage you to read about GiveWell. It's still the heart of EA from the perspective of the less-vocal majority of self-described EAs.


I think “where does this view come from?!” outrage comes off as disingenuous. I think we both know that over the past couple of years the most prominent public “face” of the EA community has been William MacAskill, who went on a major donor-funded press tour to promote his ideas on longtermism through his book “What We Owe the Future.” For most of the general public, this was probably their first encounter with the entire concept of EA.

It is perfectly fine if you don’t support MacAskill’s vision for EA’s future. I would love to hear a critique of this schism from someone within the EA community! But when you imply that critics are getting their (accurate) impression of EA from “newspaper hatchet jobs”, it feels like you’re either unaware of the way some prominent EAs are presenting the movement, or else you’re not arguing in good faith.


So you feel as though William MacAskill has been the "public face" of EA for the past couple years. That's possible, though it would make a little more sense if you had said one quarter of that time period, since his book was released in August.

I'd normally not want to get into personal accusations, but since you've already started your reply with one sentence ending in "disingenuous" and the next starting with "I think we both know" (which is infuriating), and to round it all out ended your comment with "...or else you're not arguing in good faith", I'll say it: I think you're projecting your personal Internet experience on others, and I think your personal Internet experience does not reflect that of the median person. MacAskill is not the face of EA. I think if you look at search data, you'll find that Peter Singer's popularity merely went from being ~100x MacAskill's to more like 10x during the book tour.

EA predates the notions in What We Owe the Future by many years. Present-focused charities like GiveWell were perhaps overshadowed in popularity by that book for a news cycle or two in late 2022. It happens. But the notion that that book or its author have been in any way the "most prominent" aspect of EA for the last couple of years is completely false. It's projection. In your mind, that's all EA is lately so it must be all it has ever been (hence the exaggerated timeline) and everybody else is just like you.


Here is what you said to the other poster: “Where does this view of EA come from? Hatchet jobs written by TIMES and its ilk? Twitter personalities like Elon Musk who have so much social gravity that people perceive them as the spokesmen of anything they mention that you hadn't heard of before?”

Having re-read this, it just strikes me as extremely disingenuous and uncharitable (not to mention aggressive) particularly since you seem to know that there has been a huge amount of press around EA recently due to the MacAskill longtermism book, not to mention all the press around SBF and his longtermist fund.


No! Gah! You're doing it again! I'm not being aggressive, I'm being harassed by somebody who keeps telling me what I "know"! Why do you insist on talking to me like this instead of just taking me at my word?

> ...you seem to know that there has been a huge amount of press around EA recently due to the MacAskill longtermism book...

I do not know this! I think I saw one article about it posted here on HN. I also might have read a post on somebody's substack about it a few months ago. I am not aware of any "huge amount of press", certainly not "recently". I looked into the search stats on it because of your comment. I didn't even remember the name MacAskill or much about his book before you brought it up. EA to me is still basically just malaria nets and other present-focused causes, and I can only assume that's what it is to most of the many, many people who have read Singer and not MacAskill and who donate to GiveWell year after year.


> Where does this view of EA come from?

I'll answer. I think the view comes from a different but related group known as "the rationalists".

"The rationalists", or less wrongers, fit the bill to a Tee, of all these common criticisms of EA that people are bringing up.

And the reason why this criticism may be misattributed to EA specifically is that there is a large overlap between the rationalists and EA.

The rationalists are the ones talking about AI existential risk, and colonizing Mars, and all that nonsense.


Although the thought leaders are focused on longtermism, most EA money still flows through GiveWell to mostly global health initiatives.


I find it odd that longtermists don't see that the obvious solution to long-term issues is throwing more brains at the problems, which implies doing good now.

Maybe I am missing part of the argument?


I'm assuming you mean that we should be trying to bring as many people as possible out of poverty and get them good educations (in which case, I wholeheartedly agree).

When you've built a lot of your identity on the idea that you are one of the smartest people in the room, it can be very hard to accept proposals that would challenge that. This would do so in 2 ways:

1) "Creating" more smart people means more competition for them—possibly even people who would end up being smarter than they are.

2) For a lot of them (at least from what I've come to understand), part of the "proof" that they are very intelligent is that they are very wealthy. If you start pushing the idea that intelligence doesn't automatically show itself and lead to wealth too hard, it's not very many steps from there to disproving the idea that wealth implies intelligence...and then how can they be so sure they're that intelligent after all?


> in which case, I wholeheartedly agree

That is exactly what I had in mind as I was writing it.

As for the rest, I haven't met such people as my circle is mostly academic and Scandinavian, however, evidence supports it. Evidence being how certain very prominent figures seem to enjoy the company of sycophants and yes-people.

It's a shame really.


The existential risk stuff was baked into the Effective Altruism movement from the beginning: founder William MacAskill was a student of co-founder Toby Ord, who in turn was a student of Nick Bostrom, who established x-risk research as a field in academia. Ord and Bostrom both now work at the Future of Humanity Institute, an institution dedicated to the species' long-term future, and it's mostly concerned with x-risk research. Both Ord and Bostrom are frequently cited in EA writings, with book clubs being organized around Ord's popular doomsday warning, The Precipice. Bostrom and Eliezer Yudkowsky knew each other from their early transhumanism roots, and some of the early EA community organizing was done through his LessWrong forum. That, and the establishment and funding of both EA and rationalist orgs in Berkeley by philanthropists associated with both causes, primarily Skype co-founder Jaan Tallinn, seems to explain the overlap between those communities (fun fact: the Effective Altruism online forum runs the same custom software as LessWrong). And of course Eliezer has dedicated his career to founding AI x-risk institutions.


> basic thesis of EA is "it is your duty to improve/save as many human lives as you can."

No, it's not about having any duty. (Where did you get that from? Friendly question)

Instead: if you want to help others, then you should stop and think, and do some research (or read others') before deciding where to spend your time and money.

Something like that.

Still, a movement formed around a concept that involves money (donations, lots of them) and status (getting appreciation for helping others) is going to attract some of the wrong people. So maybe it's unsurprising that from time to time we're reading negative things about movements formed around EA-the-concept, although the concept itself is neither good nor bad (well, except in some people's opinions).


I forget where I heard it, but one of the issues brought up was that EA never really formalized the 'value' of a life relative to the 'value' of other stuff.

Like, there exists some value to the entirety of the Amazon that is higher than that of a human life. Otherwise the terrible logic says that you should devastate the Amazon just to build slum housing and take away birth control. I'm not arguing for any of this, just stating the premises.

I think we can all agree that the 'value' of the whole Amazon isn't worth bulldozing for slum housing.

So the problem is where you put these fuzzy lines. You've got some extremes that 99.9% of people agree on; where is the middle? Where do you put down a line?

From the VERY little I've read of the EA debates, there seems to be no real work on this? If someone else could synthesize this as a reply, I'd be quite grateful.


At some point you've just got to decide for yourself. There's a lot of focus on human lives, and especially QALYs, because they can be thought of as interchangeable, but that hasn't stopped some EAs from focusing on animal welfare and other non-humanist causes. There's no objective way to value rainforest or animal suffering in terms of QALYs; all you can do is thought experiments, and read studies on the effectiveness of charities focused on each, so that you can decide which is more effective given your ethical framework.


I guess GiveWell still does that, but I'm not really sure what everyone else in the movement does.


Dustin Moskovitz's Good Ventures also seems to focus on more "prosaic" issues like medical research/direct cash transfers/animal welfare.


Givewell is certainly my go to.


Looks good I'll check it out.


It's unclear if the issue is EA or how to handle misbehavior in organizations without formal structure or hierarchy. It isn't like a workplace, with reasonably well-defined boundaries, but something more akin to religion, where its influence bleeds over heavily into many aspects of one's life. As such, it is probably both more devastating when one is the victim of misconduct and also more difficult to police such misconduct. I am not really sure what the answer here is. "Believe all women" is a great slogan, but I am not a fan of "guilty until proven innocent" (and I say this as a woman). OTOH, this isn't a criminal procedure, and as such, one shouldn't have to prove beyond a reasonable doubt that someone is preying on others to enforce some level of punishment. It's a tough problem.


You should be able to punish people even though there's reasonable doubt that they are culpable? Are you arguing for a "balance of probabilities" standard? Or that it's worth punishing some innocents so that the guilty are also punished?


The standard of proof required to ban someone from a once-a-month pub meetup is far lower than the standard required to, say, give someone the death penalty.


I think I am arguing for a "balance of probabilities". If (to spout off random hypothetical) the punishment is something like a banning of someone from EA conferences, then there definitely needs to be evidence of their misconduct, but that level of evidence doesn't need to be the same as if they are looking at a criminal conviction. The point is balancing the need to protect the victim while not punishing the innocent is a difficult issue outside the criminal courtroom.


For anyone that actually read the article: idk how much criticism there is of EA; rather, it seemed more like an exposé on how the Bay Area EA culture is heavily polyamorous, with men abusing job connections to pressure women. Seems more like a Bay Area thing than an EA thing. I'm not sure though, as I am not involved in any of these…


This read heavily like any of the polyamorous circles that I've been in. The poor handling of consent and the desire to maximize their reputation months after such incidents is… well, strikingly familiar to me.

I never have been poly and never will be. I wish I didn’t know anyone who was poly but they just find their ways into my circles due to poly being common with some hobbies I have.


I find that the following is true: those who are very vocal about their own virtue/altruism are usually rotten.

Those who aren't, often surprise you with their goodness when it matters.

I've found the "I am so good/empathetic/etc" crowd to be either self-deluded and useless, or manipulative, specifically for the purpose of trying to work their way in sexually (as in this story).


You've done a good job of describing a feeling I've had for years now. It really is just such a turn off when anyone is super vocal about how good of a person they are. It's like a code smell.


EA may well be the scum of the earth, and I wouldn't be surprised if this particular group is, since it sounds cultish and self-aggrandizing. But conflating their wrongdoings right away with the refusal of monogamy and the practice of polyamory is so annoying and conservative and plainly wrong. It also implies that these practices are one step removed from harassment, which is infuriating. It's exactly like conflating homosexuality and pedophilia: just wrong and violent. Shame on TIME for using this story to immediately push their conservative agenda.


In a perfect world I agree with you, but at some point you have to ask yourself why non-monogamy keeps going out of style after periods of acceptance. Even with a religion justifying it, Mormons gave up polygamy!

I think it's because polygamous relationships lend themselves to manipulative and abusive behavior more easily than monogamous relationships do. Over time that increase in probability of negative outcomes turns into social change and prescription against it.


Mormons gave it up because the US Government forced them to. It was a requirement for statehood for Utah.


Perhaps. I'm skeptical, but I have neither data nor anecdata to support that skepticism, so I won't actually dispute it.

But the problem there is still the manipulative and abusive behavior, not the polygamy. It's absurd to argue against healthy polygamous relationships just because there are unhealthy ones.


A culture where men can have many partners but women aren’t allowed to is inherently ripe for abuse. This is completely different from polyamory.


In fact male homosexuals are about 1̶0̶x̶ 2x more likely to be pedophiles than heterosexuals.

https://pubmed.ncbi.nlm.nih.gov/1556756/


> In fact male homosexuals are about 10x more likely to be pedophiles than heterosexuals.

That's not what your source says. It says that the heterosexual:homosexual ratio among those it found to be pedophiles is 11:1, which is lower than the roughly 20:1 ratio in the general population, implying a higher likelihood of pedophilia among homosexuals than heterosexuals, but only about 20/11 ≈ 1.8×, not about 10×.


Fair enough, will edit parent.


You can see the EA forums response to this post here: https://forum.effectivealtruism.org/posts/JCyX29F77Jak5gbwq/...


This is not about effective altruism as in the generic practice of high-impact giving, it relates to a highly specific subculture of EA proponents. "Polycules", 'nuff said. These are not appropriate topics in a professional discussion among strangers.


It's a mission-focused group, and such groups usually include some amount of socializing. It's not the same as a workplace (although people frequently mention their partners in a workplace also).

There are certainly polyamorous people who behave poorly, but that doesn't mean people with multiple lovers should be held to different standards than monogamous people are, just because their romantic orientation places them in the minority.


Sounds a lot like the "Peace & Love" movement of the sixties (hippies).

A lot of what went on, in those days, would be considered rape, slavery, various types of coercion and larceny, etc., these days.


TIL a woman even committed suicide as a result of her experiences with sexual harassment in the Effective Altruism groups she was part of, according to her suicide note. [1]

[1] Kathy Forth's suicide note: https://medium.com/@itai.ilyich/if-i-cant-have-me-no-one-can...


Sexual harassment is one facet of what seems to be a power problem. The power in the group derives from reputation, and many are willing to go to extreme lengths to scavenge as much of it as they can.

How many times are we going to repeat Zimbardo's experiment?

Let's not get bogged down with the details of EA, which on paper seems well intentioned. It could be a book club where a sexual harassment problem emerges, if there are power discrepancies within the group.

We also need to stop acting surprised whenever there are influential and powerful people in such a group (SBF was mentioned) and something like this emerges. The powerful within the group are effectively laundering money in exchange for sexual capital. Hookup culture, which has a wider audience than many would expect, would cynically view a group like EA as a lead generator for sex. It works until at least one person starts to call foul and the shaky foundation upon which the house of cards is built is exposed.

Come to think of it, in the current zeitgeist, if you're being Machiavellian about this sort of thing, then it might be a good strategy to quietly endure the shenanigans and collect information on such a group in order to gain leverage. The payout can be substantial if it captures enough media attention.


The article links to this EA community post on 'interpersonal harm' in the community, which I found interesting. https://forum.effectivealtruism.org/posts/NbkxLDECvdGuB95gW/...

One item caught my eye: >>There are also cases where I find a report to be alarming and would like to take action, but the person reporting does not want it known that they spoke up. In these cases, sometimes there’s very little that can be done without breaking their confidentiality.

Yikes. That would not meet reporting standards at most organizations I've been a part of. There are a lot of hard lessons behind requiring mandatory investigation of credible claims.


If you don't respect the desired confidentiality, you'll get fewer people confiding with you.

There are similar practices regarding rape - victims can seek e.g. securing the biological evidence without automatically triggering a criminal investigation. The logic being that this (critical) step should be as risk-free / barrier-free as possible.

> There are a lot of hard lessons behind requiring mandatory investigation of credible claims.

Worth keeping in mind that companies are always primarily trying to cover their backs and these policies often reflect that.


>> There are a lot of hard lessons behind requiring mandatory investigation of credible claims.

>Worth keeping in mind that companies are always primarily trying to cover their backs and these policies often reflect that.

Your comment was a fair one when it comes to corporations, but I was thinking of churches and community groups more like EA than corporations. Bad actors thrive in secrecy, particularly when they can use their power to create repercussions for people reporting bad acts, creating a culture of silence. Mandatory investigation is one of the few effective ways to resolve that. That's why it's the policy and/or law in many circumstances.

If my group had the same policy as EA, I would be very uncomfortable with it.


> men at informal EA gatherings tried to convince her

A lot of people are focusing on the polyamory. While that may have been something that offended people or made them uncomfortable, that is not the issue.

The key issues here are around the words "informal" and "convince".

I have non-traditional views about human sexuality. Even if I had traditional views, I spend a lot of time in different cultures where the traditional views can vary quite a bit.

I also strongly believe in talking about ideas, challenging viewpoints, and being non-judgmental and open to new experiences. I talk openly about my views and experiences. Sometimes, I extend invitations to people with different values than mine to participate in sexual interactions.

Yet, as much as I believe in being open about my views, there are clear boundaries.

On formality: One boundary is that I never talk about sex or sexuality with professional colleagues, or when I am in a position of power. Simply put, some people are not comfortable talking about these things, and if they are not in a position where they feel they can say no or avoid it, talking about it is not ok. It is harassment to talk about sexual topics at a formal gathering. It may be ok to talk about sexual topics at an informal gathering, but if professional colleagues are there, it's wise to approach such topics cautiously or best not at all.

On convincing: it's always a good idea to accept a no gracefully and respectfully. If someone declines an invitation, don't try to convince them. If someone says they are offended by your views, don't try to change their mind. If someone seems hesitant to talk about a topic, don't continue. On the other hand, if all of the participants of a conversation are eagerly engaged and asking about different viewpoints, then it is ok to try to convince others of your viewpoint.

I think frequent problems in tech and these problems in EA arise not because people have non-traditional views or are open about sexuality. The problems happen because people in these groups tend to be a little bit on the spectrum and unaware of power dynamics, people's comfort levels, and how those things can affect people's expressions (or lack of expression) of consent.

Some simple rules for those in doubt:

1) Never talk about sex with professional colleagues.

2) Never try to convince others of your sexual views unless they ask for your opinion.

3) When in doubt, don't talk about sex.

I am a firm believer in talking openly and without judgment about sexuality. But in order to do so safely, full awareness and respect for these boundaries are key.


I’m just scrolling here astounded that people have difficulty understanding that you’re not supposed to talk about sex when the context of a gathering is academic or professional, and when no one asked you to.


> If someone seems hesitant to talk about a topic, don't continue.

I wish more people followed this advice in general.

One game some people play (which I really hate) is when they say: "I am not asking you about $TOPIC, I am just trying to understand why you do not want to talk about $TOPIC".

Talking about why you do not want to talk about something is still talking about it.

And sometimes I am simply not interested in something or thinking about it any further.


This is a problem as old as human beings.

People want sex. Sex happens most for those who are considered most socially valuable (for men anyway).

People look for socially popular things, select those that are accessible and adopt them to signal virtue and social value, to try and increase the likelihood they will appear as more sexually valuable.

However, this is risky. Discovery of deception about social value causes a much larger loss of social value than was gained from the deception in the first place; this is because it's commonly gamed for this very reason.

People who are more interested in things than people (mostly men) really struggle to grasp these interactive social properties in real time, and will often even be so unaware that they will not only openly admit to being deceptive, but get actually angry with the victim when the deception doesn't work.

These are called closed contracts: when losers give gifts to someone in an unprompted fashion, it is not because they want to be altruistic, but because they think the altruism can be used to present themselves as more sexually attractive. Resorting to trickery out of desperation.

These people are very common, and can often saturate movements like this.


As long as the person has a nice appearance no one will worry about sincerity of intentions.


Not to say that it isn't a problem or that the community shouldn't do better, but this seems like the sort of stuff you could dredge up about pretty much any group of comparable size. I'm intensely curious about the decision-making process that got this article published.


Seems like the issue here is not EA or a rationalist world view. The issue seems to be the lack of compartmentalization of the professional and personal lives, a lack of procedural boundaries around social conduct, and a general lack of maturity. This doesn't seem particularly rational...


"Effective Altruism" is one of those things like "All Lives Matter" where what it says on the box is not reflective of the way that people who identify with the ideology practice it.


The best way to fleece a flock is to wrap yourself in a cloak of good intentions. You see it in EA, in churches, in Black Lives Matter. Basically as soon as someone proclaims themselves as beyond criticism because of their superior moral position, start checking for your wallet.


So it is doublespeak?


With the apparent link between tech and EA, I think the article is correct in saying that this harassment is a reflection of the same problem in broader tech circles. We still have a lot of work to do if we want everyone to feel comfortable in our spaces.


Why is Time magazine printing the name as uncapitalized "effective altruism"?

Those two terms have important generic meanings, and this Effective Altruism thing doesn't appear to be some generic natural combination of the terms.


>Thousands have signed a pledge to tithe at least 10% of their income to high-impact charities. From college campuses to Silicon Valley startups, adherents are drawn to the moral clarity of a philosophy dedicated to using data and reason to shape a better future for humanity. Effective altruism has become something of a secular religion for the young and elite.


I was about to say, tithing 10% sounds like church. Heh, the Catholic Church will send you a bill based on your taxes.


In some countries the government will even collect.


"The Bay Area rationalist community can not escape sexual misconduct and drama."

https://www.reddit.com/r/SneerClub/comments/avu24t/the_bay_a...


Least surprising thing ever when the movement's logic resembles the philosophical version of pickup artistry (and I'd bet good money that it's a thick middle of the Venn diagram) in terms of justifying some shitty behavior with rhetorical sleights of hand.


I was always on the outside of these groups in the Bay Area, but saw enough to know this was happening with the polygamy subculture / group houses of EA.

Horrible.


As someone who has been invited on several occasions into Bay Area EA groups, my experience has been that many of the people who are involved are more concerned with some grand concept of ethics and skipped over an interest and education in simple decency. While only anecdotal, these reports are consistent with the type of environments I saw that would allow sexual assault/harassment to be more commonplace than in other social settings. Even in college fraternities, you are more likely to encounter someone with the common sense, decency, and willingness to tell guys who take things too far to cut it out. There's much more room for emotional investment and manipulation, and ensuing toxicity, within the groups described in the article.


> She noticed that EA members in the Bay Area seemed to work together, live together, and sleep together, often in polyamorous sexual relationships with complex professional dynamics

I personally think that this is more a problem with the Bay Area EA "movement" (whatever that means) than with EA itself.

I am a member of some groups in South America that organise a "big soup" (PT-BR: Sopão) for homeless kids once a week, and we have collaborated with the Blind Helpers Association (PT-BR: Fundação Dorina Nowill) since well before the EA "movement". For me it's one of the most fulfilling things I have had in my life, and it's quite sad to read anecdotal articles like this that classify an entire quiet and necessary movement this way.

Personally, I think EA is more about what an individual can do _without any kind of publicity_ than about being part of some group and/or association.


Effective Altruism has a "make me think whoever is involved in this isn't a grifty scammer weirdo" problem


Surprised they didn't mention Dan Price. He should be the poster boy in all this. Some people smelled his $70k/yr bullshit a mile away; most people refused to believe it because it was a great story. Wonder where his supporters are now that there's a felony case against him with multiple accusers.


Is Dan Price associated with effective altruism?


That was a fraud to steal the business value from his brother/co-owner.


It absolutely was fraud, but people refused to believe it. He ran with it under this living wage spin and became some folk hero. But now he's a folk hero rapist.


I've been around long enough to see this kind of thing time and time again. I still remember my father getting involved with the local "Technocracy" movement, which is as kooky and self-serving as you'd expect from the name (but hey, it was the 90s).

Any time someone offers a simple solution or philosophy to solve complex problems, that should be the first giant red flag that they're full of shit and wishful thinking, with self-serving deception just around the corner.


A lot of people in EA have poor social skills and a lot of people in the community become friends and lovers. That probably explains most of this phenomenon.


You seem to be implying that "poor social skills" are a valid excuse for sexual misconduct?


An explanation is not necessarily an excuse.


So their godless little church has a sexual harassment and abuse problem? Welcome to the club.


I find it very hard to give to charity when any meaningful contribution I make can be immediately dwarfed by a single contribution from a whale. Maybe when/if there are fewer billionaires, I'll feel differently.


You see the same set of problems in any movement of super-wealthy men, especially if you are female.


This is a problem with all the cargo cults, or with cults in general. Make money and indoctrinate... that is what they are doing. Display versus real deed.


Does Time’s readership have any clue about Effective Altruism?


Well, there's this high-profile figure in the EA community who happened to have his name tied to FTX, not just for being SBF's guru but also for being on the FTX payroll in FTX's early days.

But not only that: it gets better, for he happens to have bought a mansion in the UK, supposedly for EA, with $15m of stolen money SBF gave him (a mansion which may be clawed back; I sure do hope it is).

The dude is a teacher at some fancy uni in the UK.

Tells me all I need to know about EA and the kind of people higher up the echelons there. It's obviously highly manipulative and I'm not surprised at all to see manipulative gurus misappropriating money and using their manipulative tactics to prey on women.

P.S.: Has he altruistically given back the mansion acquired with stolen money?


No, see, it's actually very complicated and difficult to maintain an optimal environment for thinking up new ways to mitigate the existential risk posed by an AI that has a .0001% chance to be invented ten million years from now, so really from a standpoint of perfect altruism Bostrom has to keep the mansion.


I think this may be a bit of a HN bubble. I can't imagine most people have a need for a social or philosophical framework for their philanthropic identity.


After SBF and FTX, yes


(Original title was too long for HN, did my best)


Well, this is unsurprising.


It's a movement invented as yet another way for rich, wealthy, and/or privileged people not just to feel better about themselves for being so, but for being destined to be so for the supposed benefit of others. It isn't surprising to me in the least that it is rife with other sociopathic behavior.


Is EA still a thing? I thought the post rationalist and e/acc communities were where the forefront of human civilization in the Bay Area is now.


> pollute the epistemic environment

Double Yikes


For those who don’t click the link, this is a deeply researched piece of journalism. Not an opinion piece.


I suppose if you see yourself as messianic, it’s hard to imagine why someone wouldn’t want to screw you.


Tech bros gonna tech bro. This is the nerd equivalent of a high school football team.


It's a movement invented as yet another way for rich, wealthy, and/or privileged people to feel better about themselves for being so. It isn't surprising to me in the least that it is rife with other sociopathic behavior.


"effective altruism" hurts my ear


This is a free article, and for the same reason that photography books usually have a naked lady on the cover, these free articles are always about sex or something else that appeals to your instincts.

Chances are this story was exaggerated to some extent, to grab your attention so you subscribe.

If something is free, you are the product.


I waded through many, many paragraphs to discover that some of the crimes included people making comments that made others feel uncomfortable. I was kind of expecting more.

So, yes, it looks like I was the product here. It made me feel uncomfortable!


Stories, in order to be interesting, have to cause emotions; any emotion will do. We even classify movies by the type of emotions they evoke: horror, love, drama, humour, gore... and this is yet another genre. It is not a free article; it is bait to get you subscribed.


The readers discovered that EA is mixing sex with charity. It’s pretty weird.


It's weird, but the title isn't saying it's weird. The title is saying people are getting harassed.

Technically a catcall is harassment, but that shouldn't make the news. This article has a clickbait social justice title when in reality the title should comment on how EA is some sort of sexual cult.


[flagged]


Nature is quite clever. Powerful hormones are triggered in the female cycle that operate outside of the conscious mind. We are not rational creatures, especially when intoxicated.

https://pubmed.ncbi.nlm.nih.gov/15216427/


good


This EA thing looks perfectly bad, in that it appears to be charity with absolutely no skin in the game? Just nothing but condescension?


If someone (or a group of someones) needs a brand to conduct philanthropy, then they're a. lazy, b. not the sharpest rusty nail foot detector, and/or c. trying to get publicity and approbation for appearing to do good.

Philanthropy done right is anonymous, organic, and helps with basic needs as efficiently as possible.

Unfortunately, it sounds like a cult meets a swindle to me.


What? A bunch of autistic male nerds obsessed with how smart they are don't respect women? I'm so surprised.

Anyway, I dismissed EA because I dismiss basically all forms of utilitarian ethics. When you try to use math and statistics to form your entire moral basis, you can kinda justify anything you want, and get to some weird contradictory judgements as well.


Is autism correlated with disrespect of women?


Not directly. But it is directly correlated with disregard for others feelings, and general lack of social awareness. Which in the case of male EA nerds certainly seems to lead to a lot of situations in which women were disrespected to say the least.



