Hacker News

Rationalists have always rubbed me the wrong way too, but your argument against AI doomerism is weird. If you care about first principles, how about the precautionary principle? "Maybe it's actually benign" is not a good argument for moving ahead with potentially world-ending technology.





I don't think "maybe it's benign" is where anti-doomers are coming from; it's more like "there are also costs to not doing things".

The doomer utilitarian arguments often seem to involve some sort of infinity or really large numbers (much like EAs) which result in various kinds of philosophical mugging.

In particular, the doomer plans invariably result in some need for draconian centralised control. Some kind of body or system that can tell everyone what to do with (of course) doomers in charge.


It's just the slippery-slope fallacy: if X then obviously Y will follow, and there will be no further decisions, debate or time before it does.

One of my many peeves has been the way that people misuse the term “slippery slope” as evidence for their stance.

“If X, then surely Y will follow! It’s a slippery slope! We can’t allow X!”

They call out the fallacy they are committing BY NAME and think that it somehow supports their conclusion?


I rhetorically agree it's not a good argument, but its use as a cautionary metaphor predates its formalization as a logical fallacy. Its summoning is not proof in and of itself (e.g., in First Amendment debates). It suggests a concern rather than demonstrates one. It's lazy, and a good habit to rid oneself of. But its presence does not invalidate the argument.

Yes, it does. The problem with the slippery slope is that the slope itself is not argued for. You haven’t shown the direct, inescapable causal connection between the current action and the perceived very negative future outcome. You’ve just stated/assumed it. That’s what the fallacy is.

He wasn't saying "maybe it's actually going to be benign" is an argument for moving ahead with potentially world ending technology. He was saying that it might end up being benign and rationalists who say it's definitely going to be the end of the world are wildly overconfident.

No rationalist claims that it's "_definitely_ going to be the end of the world". In fact, they estimate the chance that AI becomes an existential risk by the end of the century at less than 30%.

Adding numbers to your reasoning, when there is no obvious source for these probabilities (we aren't calculating sports odds or doing climate science), is not really any different from writing a piece of fiction to make your point. It's the same basic thing that objectivists did, and why I dismiss most "Bayesian reasoning" arguments out of hand.

Which content did you engage with that led you to the conclusion that they base their estimates on "no obvious source for these probabilities"? A link would be appreciated.

Who is "they" exactly, and how can they estimate the probability of a future event based on zero priors and a total lack of scientific evidence?

> Who is "they" exactly

Rationalists, mostly self-identified.

> how can they estimate the probability of a future event based on zero priors and a total lack of scientific evidence?

As best as they can, because at the end of the day you still need to make decisions (You can of course choose to do nothing and ignore the risk, but that's not a safe, neutral option). Which means either you treat it as if it had a particular probability, or you waste money and effort doing things in a less effective way. It's like preparing for global warming or floods or hurricanes or what have you - yes, the error bars are wide, but at the end of the day you take the best estimate you can and get on with it, because anything else is worse.
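The point above is essentially expected-value reasoning: even a rough probability estimate lets you compare the cost of preparing against the cost of ignoring the risk. A toy sketch (every number here is a made-up assumption for illustration, not a real estimate):

```python
# Toy expected-cost comparison under an uncertain risk estimate.
# All probabilities and costs below are illustrative assumptions.

def expected_cost(p_disaster, cost_disaster, cost_mitigation, mitigated):
    """Expected cost of acting (mitigated=True) vs. ignoring the risk."""
    if mitigated:
        return cost_mitigation          # pay up front, avert the loss
    return p_disaster * cost_disaster   # gamble on nothing happening

p = 0.05            # assumed probability of the disaster
loss = 1000.0       # assumed cost if it happens
mitigation = 30.0   # assumed cost of preparing

act = expected_cost(p, loss, mitigation, mitigated=True)
ignore = expected_cost(p, loss, mitigation, mitigated=False)
print(act, ignore)  # preparing wins in expectation whenever p * loss > mitigation
```

Even refusing to choose implicitly picks a side of the `p * loss > mitigation` inequality, which is the commenter's point: the error bars are wide, but some estimate is baked into whatever you do.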


But their best estimate is utterly worthless with zero basis in actual reality.

Well when you hear a possible disaster is coming, what do you do? Implicitly you make some kind of estimate of the likelihood - ultimately you have to decide what you're going to do about it. Even if you refuse to put a number on it or talk about it publicly, you still made an estimate that you're living by. Talking about it at least gives you a chance to compare and sanity-check.

Well I have some common sense so I don't react to things that I "hear" from random worthless morons.

> I don't react to things that I "hear" from random worthless morons.

Which is to say that you've made an estimate that the probability is, IDK, <5%, <1%, or some such.


The precautionary principle is stupid. If people had followed it then we'd still be living in caves.

I take it you think the survivorship bias principle and the anthropic principle are also stupid?

Don't presume to know what I think.

Don't make an argument based on survivorship bias then...

That's a logical fallacy. You haven't established any bias.

But not accepting this technology could also be potentially world-ending, especially if achieving that requires starting many new wars; so caring about first principles like peace and anti-Luddism brings us back to the original "real lack of humility..."

The precautionary principle does active harm to society because of opportunity costs. All the benefits we have reaped since the Enlightenment have come from proactionary endeavors, not precautionary hesitation.




