> whenever you know you're going to use large enough numbers in Java, you probably want to use BigInteger or BigDecimal
When you've made a conscious decision, you can pick the right thing, sure. The problem usually happens when you haven't thought about it at all, which is why the default should be safe and the unsafe optimisation should be opt-in.
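For concreteness, a minimal Java sketch (class name is made up for the demo) of how the defaults currently point the other way: the wrapping arithmetic is the default, and the safe type is the opt-in:

```java
import java.math.BigInteger;

// Illustrative demo: Java's default int arithmetic wraps silently;
// the arbitrary-precision type is the thing you have to opt in to.
public class OverflowDefault {
    public static void main(String[] args) {
        int wrapped = Integer.MAX_VALUE + 1;   // overflows: no error, no warning
        System.out.println(wrapped);           // prints -2147483648

        BigInteger exact = BigInteger.valueOf(Integer.MAX_VALUE).add(BigInteger.ONE);
        System.out.println(exact);             // prints 2147483648
    }
}
```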
Maybe for LOB (line-of-business) applications it would be better if languages defaulted to arbitrary-precision arithmetic. But scientific computing is also a huge field that often uses the same languages, and there arbitrary precision is often the completely wrong tool, e.g. it would make certain key algorithms (like Gaussian elimination) exponential, because the exact intermediate values keep growing in size.
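The blowup is easy to see if you do exact rational arithmetic naively, with no gcd reduction between steps: denominators multiply on every subtraction, so operand sizes roughly double at each elimination step. A hypothetical Java sketch (the `Frac` class and all names are made up for the demo, and it skips pivoting):

```java
import java.math.BigInteger;
import java.util.Random;

// Unreduced exact fractions: worst case for naive exact elimination.
final class Frac {
    final BigInteger num, den;
    Frac(BigInteger num, BigInteger den) { this.num = num; this.den = den; }
    static Frac of(long n) { return new Frac(BigInteger.valueOf(n), BigInteger.ONE); }
    Frac sub(Frac o) { // p1/q1 - p2/q2 = (p1*q2 - p2*q1) / (q1*q2)
        return new Frac(num.multiply(o.den).subtract(o.num.multiply(den)),
                        den.multiply(o.den));
    }
    Frac mul(Frac o) { return new Frac(num.multiply(o.num), den.multiply(o.den)); }
    Frac div(Frac o) { return new Frac(num.multiply(o.den), den.multiply(o.num)); }
    int bits() { return Math.max(num.bitLength(), den.bitLength()); }
}

public class EliminationBlowup {
    public static void main(String[] args) {
        int n = 12;
        Random rnd = new Random(42);
        Frac[][] a = new Frac[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                a[i][j] = Frac.of(rnd.nextInt(9) + 1); // small positive entries

        for (int k = 0; k < n - 1; k++) {
            // No pivoting: we assume the pivot stays nonzero for this demo.
            for (int i = k + 1; i < n; i++) {
                Frac factor = a[i][k].div(a[k][k]);
                for (int j = k; j < n; j++)
                    a[i][j] = a[i][j].sub(factor.mul(a[k][j]));
            }
            int maxBits = 0;
            for (int i = k + 1; i < n; i++)
                for (int j = k; j < n; j++)
                    maxBits = Math.max(maxBits, a[i][j].bits());
            System.out.println("step " + k + ": largest entry is " + maxBits + " bits");
        }
    }
}
```

(With gcd reduction after every operation the growth becomes polynomial rather than exponential, but every operation is still arbitrary-precision work, so it stays far slower than fixed-width floats.)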
I feel like this is just one of those things that developers should know about so they can make the correct choice, just like DB indices.
> arbitrary precision is often the completely wrong tool, e.g. it would make certain key algorithms (like Gaussian elimination) exponential.
Sure, but even in science, taking a long time to deliver a result, or failing to deliver one at all, is much safer than silently delivering the wrong result. There's the concept of fail-stop if you want a rigorous approach to safety; there's no analogous safety model that says silent overflow is the safe option.
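Java actually ships a fail-stop option for integer arithmetic: the Math.*Exact methods, which throw instead of wrapping. A minimal sketch (class name made up):

```java
// Fail-stop arithmetic: Math.multiplyExact raises ArithmeticException
// on overflow instead of silently returning a wrapped value.
public class FailStopDemo {
    public static void main(String[] args) {
        try {
            long product = Math.multiplyExact(4_000_000_000L, 4_000_000_000L);
            System.out.println(product);
        } catch (ArithmeticException e) {
            // The computation stops loudly rather than continuing
            // with a silently wrapped, wrong result.
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}
```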