
> Just shows a real lack of humility and lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected

We can also apply the principle of epistemic humility to, say, climate change: we don't have a full grasp of the Earth's biosphere, so maybe some unexpected negative feedback loop will kick in and climate change will reverse itself.

That doesn't mean we shouldn't try to prevent it. We waste resources in the hypothetical worlds where climate change reverses itself, but we may prevent civilizational collapse in the hypothetical worlds where it goes as expected, or worse.
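To make that decision logic concrete, here's a minimal expected-value sketch in Python. Every number in it is made up purely for illustration; the point is the asymmetry, not the specific values:

    # Toy expected-value comparison for acting vs. not acting under
    # uncertainty. All numbers are hypothetical, for illustration only.
    p_self_reverts = 0.1       # assumed chance the problem resolves itself
    cost_of_acting = 1.0       # resources spent on prevention (arbitrary units)
    cost_of_collapse = 1000.0  # cost if the problem plays out and we did nothing

    ev_act = cost_of_acting                                  # paid in every world
    ev_do_nothing = (1 - p_self_reverts) * cost_of_collapse  # paid only if it doesn't self-revert

    print(ev_act, ev_do_nothing)  # 1.0 vs 900.0: acting wins despite the uncertainty

The asymmetry between the two costs, not certainty about the outcome, is what drives the decision.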

Rationalism is about acting under uncertainty. So

> I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is

is the unspoken default. Some posts on LessWrong even make it explicit: "Epistemic status: not sure about this, but here are my thoughts" (though I find that a bit superfluous).

Nevertheless: the enormous danger posed by an unaligned superhuman AI means we can't afford to ignore even a small chance that they're right about it.

Though I myself think that their probability estimates for some of these worries are influenced by the magnitude of the negative consequences (humans aren't perfect Bayesians, after all, and that includes me and the validity of this statement).
