He wasn't saying that "maybe it will actually turn out to be benign" is an argument for moving ahead with potentially world-ending technology. He was saying that it might end up being benign, and that rationalists who say it's definitely going to be the end of the world are wildly overconfident.
No rationalist claims that it's “_definitely_ going to be the end of the world”. In fact, they put the chance that AI becomes an existential risk by the end of the century at less than 30%.
Adding numbers to your reasoning, when there is no obvious source for those probabilities (we aren’t calculating sports odds or doing climate science), is not really any different from writing a piece of fiction to make your point. It’s the same basic move the objectivists made, and it’s why I dismiss most “Bayesian reasoning” arguments out of hand.
Which content did you engage with that led you to the conclusion that their estimates have “no obvious source for these probabilities”? A link would be appreciated.
> how can they estimate the probability of a future event based on zero priors and a total lack of scientific evidence?
As best they can, because at the end of the day you still need to make decisions. (You can of course choose to do nothing and ignore the risk, but that's not a safe, neutral option.) Which means either you treat the risk as if it had a particular probability, or you waste money and effort doing things in a less effective way. It's like preparing for global warming or floods or hurricanes or what have you - yes, the error bars are wide, but at the end of the day you take the best estimate you can and get on with it, because anything else is worse.
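To make that concrete, here's a toy expected-cost sketch in Python. Every number in it (the probability, the cost of preparing, the damage figures) is made up purely for illustration - the point is just that some probability is baked into whichever choice you make, whether you state it or not.

```python
# Toy expected-cost comparison for "prepare vs. ignore" under an uncertain risk.
# All numbers are illustrative assumptions, not estimates of anything real.

p_disaster = 0.05            # assumed probability the disaster happens
cost_of_preparing = 10.0     # up-front cost you pay if you choose to prepare
damage_if_unprepared = 1000.0
damage_if_prepared = 100.0   # preparation reduces the damage, it doesn't eliminate it

expected_cost_ignore = p_disaster * damage_if_unprepared
expected_cost_prepare = cost_of_preparing + p_disaster * damage_if_prepared

# The probability above which preparing beats ignoring the risk.
break_even_p = cost_of_preparing / (damage_if_unprepared - damage_if_prepared)

print(f"Expected cost if you ignore the risk: {expected_cost_ignore:.1f}")
print(f"Expected cost if you prepare:         {expected_cost_prepare:.1f}")
print(f"Preparing wins whenever p_disaster > {break_even_p:.3f}")
```

Even with wide error bars on `p_disaster`, you can vary it and see where the decision flips - which is exactly the implicit estimate you're acting on when you "just get on with it".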
Well, when you hear that a possible disaster is coming, what do you do? Implicitly you make some kind of estimate of the likelihood - ultimately you have to decide what you're going to do about it. Even if you refuse to put a number on it or talk about it publicly, you've still made an estimate that you're living by. Talking about it at least gives you a chance to compare and sanity-check.