That’s not how logic works. The GP is applying the precautionary principle: when there’s even a small chance of a catastrophic risk, it makes sense to take precautions, like restricting who can build superintelligent AI, similar to how we restrict access to nuclear technology.

Changing the premise to "superintelligence is the only thing that can save us" doesn’t invalidate the logic of being cautious; it just shifts the debate to which risk is more plausible. The reasoning about managing existential risks remains valid either way. The real question is which scenario is more likely, not whether the risk-based logic is flawed.

Just like with nuclear power, which can be both beneficial and dangerous, we need to be careful in how we develop and control powerful technologies. The recent deregulation by the US administration is an example of us currently doing the opposite.


Not really. If there is a small chance that this miraculous new technology will solve all of our problems with no real downside, we must invest everything we have and pull out all the stops, for the very future of the human race depends on AGI.

Also, @tsimionescu's reasoning is spot on, and exactly how logic works.


It literally isn't. Changing or reversing a premise without addressing the point that was made is not a logically valid way to counter the initial argument.

Likewise, your proposition that any "small" chance justifies investing "everything" disregards the same argument about the precautionary principle for potentially devastating technologies. You've also slipped in an additional "with no real downside", which you cannot predict with certainty anyway, rendering your argument unfalsifiable. At least tsimionescu didn't dare make such a sweeping (but baseless) statement.


Some of us believe that continued AI research is by far the biggest threat to human survival, much bigger, for example, than climate change or nuclear war (which might cause tremendous misery and reduce the population greatly, but seem very unlikely to kill every single person).

I'm guessing that you think that society is getting worse every year or will eventually collapse, and you hope that continued AI research might prevent that outcome.