The fundamental thing an exception does is allow you to choose where in the stack to react to it. The fundamental thing a checked error does is force you to actually choose (even if that choice is to bubble all the way up and crash the program), rather than forget and end up with a bug.
The choice about where to handle the exception is almost always a good thing, since it reduces the boilerplate of dealing with that condition. The cases where it's not a good thing are when it's either impossible to deal with the condition at all (e.g. an assertion failure) or when it must obviously be handled locally (e.g. errno==EAGAIN type stuff).
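Concretely, in Python (the function names are made up): the failure starts two frames down, but only the outermost caller has to decide what to do about it, and the frame in between stays free of error-forwarding boilerplate.

    def read_record(path: str) -> str:
        with open(path) as f:          # an OSError can originate here...
            return f.read()

    def sync_one(path: str) -> None:
        record = read_record(path)     # ...pass through this frame untouched...
        print(f"syncing {len(record)} bytes from {path}")

    def sync_all(paths: list[str]) -> None:
        for path in paths:
            try:
                sync_one(path)         # ...and get handled here, once, by choice.
            except OSError as e:
                print(f"skipping {path}: {e}")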
The requirement to actually make the decision is a good thing in some cases (e.g. I/O), but not in others (e.g. out of memory), because the overhead of making that decision so often outweighs the benefit of avoiding that class of bug. It's beneficial when the condition can only arise while doing certain well-defined activities.
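To illustrate the asymmetry in Python (nothing is enforced here, it's just where the handlers naturally want to live):

    def load_settings(path: str) -> dict[str, str]:
        try:
            with open(path) as f:      # I/O is a well-defined failure point:
                lines = f.readlines()  # deciding about OSError here is cheap
        except OSError:
            return {}
        # Building the dict could, in principle, raise MemoryError -- but so
        # could almost any other line in the program, so it simply propagates.
        return dict(line.strip().split("=", 1) for line in lines if "=" in line)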
There are also places where there just shouldn't be an error at all. E.g. a null pointer dereference shouldn't be a runtime error; your program should be rejected if the compiler can't prove through static analysis that the dereference is impossible (even if that just forces the author to write assert(ptr != 0)).
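Python plus a type checker like mypy is a rough approximation of what I mean: the checker rejects the attribute access on a possibly-None value unless you discharge it, and the assert is the cheapest way to do that.

    from dataclasses import dataclass

    @dataclass
    class User:
        name: str

    def find_user(users: dict[int, User], uid: int) -> User | None:
        return users.get(uid)

    def greeting(users: dict[int, User], uid: int) -> str:
        user = find_user(users, uid)
        assert user is not None       # the moral equivalent of assert(ptr != 0):
        return "hello " + user.name   # without it, the checker rejects this access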
I guess I'm converging on a scheme that looks like: I/O errors are checked exceptions. Out of memory is an unchecked exception. Assertion failures are panics which cannot be caught. Division by zero is prevented by static analysis. I'm not sure what should be done with EAGAIN/EWOULDBLOCK type stuff -- it doesn't really feel like those should be exceptions, but introducing a whole new "error" handling idiom just for this doesn't feel great either.
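For comparison, Python does model EAGAIN as an exception (BlockingIOError), and the handler ends up glued to the call site anyway, which is part of why it feels like the wrong idiom:

    import socket

    def try_read(sock: socket.socket, n: int = 4096) -> bytes | None:
        sock.setblocking(False)
        try:
            return sock.recv(n)        # data was ready
        except BlockingIOError:        # EAGAIN/EWOULDBLOCK: not a failure, just
            return None                # "not yet" -- retry after select/poll says so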
Yes, that's intentional. They are still exceptions because there are rare cases where you want to catch them. They're unchecked because it's not worth the hassle of declaring them everywhere. Python's KeyboardInterrupt is another great example -- it can theoretically happen almost anywhere, but most scripts don't have anything sensible to do with it, and it will never happen in a daemon or GUI program. If you happen to be writing a REPL, though, it could be useful to catch it and interrupt just the current command.
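A toy sketch of that (eval is just a stand-in for whatever actually runs the command):

    def repl() -> None:
        while True:
            try:
                line = input(">>> ")
                print(eval(line))          # stand-in for running the command
            except KeyboardInterrupt:
                print("\n(interrupted)")   # drop this command, keep the session
            except EOFError:
                break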
I'm not sure interrupt is the right idiom there either. An actor or coroutine dedicated to processing keyboard input seems more sensible, precisely because interrupts can happen almost anywhere, and non-determinism is not something you want to introduce accidentally or implicitly.
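Roughly the shape I mean, sketched with a plain thread rather than a coroutine to keep it short (the command handling is a placeholder):

    import queue
    import sys
    import threading

    def keyboard_reader(q: queue.Queue) -> None:
        for line in sys.stdin:             # this thread is the only reader of stdin
            q.put(line.rstrip("\n"))
        q.put("quit")                      # EOF behaves like an explicit quit

    def main() -> None:
        q = queue.Queue()
        threading.Thread(target=keyboard_reader, args=(q,), daemon=True).start()
        while True:
            command = q.get()              # keyboard input arrives here, and only here
            if command == "quit":
                break
            print(f"got: {command}")       # placeholder for real command handling

    main()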