Why checked exceptions failed (borretti.me)
108 points by rsaarelm on July 15, 2023 | 314 comments



My opinion is the exact opposite; see https://debugagent.com/everything-bad-in-java-is-good-for-yo...

Checked exceptions are unpopular since no one likes responsibility, but they are great when used right. Calls that must have proper cleanup after them (e.g. SQL, IO) are checked. The fact that this must be communicated via interfaces is hugely important.

There are "weird" problems such as stream close() throwing a checked exception. That's an API misbehavior that has a workaround thanks to try-with-resources.


This comes from a misunderstanding of the reason why exceptions were designed the way they were. The whole point of exceptions bubbling up without having to write support code to pass them further along is to make the purpose of the function clear to the reader.

Go's exceptions have the same unfortunate property as Java's checked exception. And that's what makes Go's code atrocious. Every other line you see something like:

    x, e := f()
    if e != nil {
        return y, e
    }
    ...
It makes it very easy to make mistakes when you have to write a lot of repetitive code. You make typos, and because they often land on the "bad" path, they aren't immediately discovered. You have to memorize the state of your function with respect to variable initialization, because now you cannot automatically initialize and destroy them all together. You need to create a lot of helper variables whose only purpose is to transfer a return value from one function to another...

If you think that you want checked exceptions, then you don't want exceptions at all. You are denying them the very purpose they were created for. But there are alternative ways to deal with unexpected events in program execution. Monads would be one of those. So... maybe just don't use exceptions?


No. This comes from a misunderstanding of checked exceptions.

Checked exceptions don't mean I need to handle the exception right now. They mean I need to either do that or declare throws. Declaring throws is fine: it implicitly documents the code and enforces a similar requirement up the chain.

Checked exceptions aren't the default and shouldn't be.
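The two options can be sketched in plain Java (the method names here are hypothetical, purely for illustration):

```java
import java.io.IOException;

public class ThrowsDemo {
    // Option 1: declare throws and let the caller decide.
    // The signature itself documents the failure mode.
    static String loadConfig(String path) throws IOException {
        if (path.isEmpty()) {
            throw new IOException("no path given");
        }
        return "config from " + path;
    }

    // Option 2: handle it right here, e.g. by falling back to a default.
    static String loadConfigOrDefault(String path) {
        try {
            return loadConfig(path);
        } catch (IOException e) {
            return "default config";
        }
    }

    public static void main(String[] args) {
        System.out.println(loadConfigOrDefault(""));        // falls back
        System.out.println(loadConfigOrDefault("app.cfg")); // succeeds
    }
}
```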


Declaring throws ends up leaking layers of abstraction in practice. Many thrown checked exceptions are not appropriate to pass above certain layers. It is often not OK to expose checked exceptions that are really implementation details up the chain. It's not a "similar requirement": it is exactly the same requirement, and often an inappropriate one at many levels up the chain.

If you just pass exceptions up in throws, you cause cascading changes to checked-exception signatures and create a bunch of noise. That is what many people resort to in practice, because they don't have the experience, and it is almost unheard of to have any kind of automatic enforcement (linters, static analysis) that prohibits implementation-detail checked exceptions from propagating above certain levels.

It is almost never OK to propagate IOException, for example, past maybe 1 or 2 levels of private helper methods. It should almost never be propagated past your class boundary, and you have to think twice about adding throws even to protected methods, unless your subclass implementations are specifically implementing alternate IO calls.

So yes, that's why checked exceptions have failed. And when you actually apply this extreme discipline to your code, you end up with a ton of try/catch handling noise and a whole lot of new exception classes, and it takes similar extreme discipline to always avoid poor/improper try/catch handlers. Refactoring code across certain boundaries becomes a much bigger hassle, so you end up discouraging refactoring across those boundaries, which leads to code that goes stale more easily and design decisions that are much harder to back out of.
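The discipline described above is a translation at every layer boundary: catch the implementation-detail exception and rethrow a domain-level one. A minimal sketch (all names hypothetical):

```java
import java.io.IOException;

public class BoundaryDemo {
    // A domain-level exception so IOException doesn't leak past the layer.
    static class RepositoryException extends Exception {
        RepositoryException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Low-level helper: IOException is fine this deep.
    static byte[] readRaw(String id) throws IOException {
        throw new IOException("disk read failed for " + id);
    }

    // The boundary: catch the implementation detail, rethrow a domain error.
    static byte[] loadRecord(String id) throws RepositoryException {
        try {
            return readRaw(id);
        } catch (IOException e) {
            throw new RepositoryException("could not load record " + id, e);
        }
    }

    public static void main(String[] args) {
        try {
            loadRecord("42");
        } catch (RepositoryException e) {
            System.out.println(e.getMessage()
                    + " (cause: " + e.getCause().getMessage() + ")");
        }
    }
}
```

This is exactly the "try/catch noise and new exception classes" cost the comment describes: one wrapper class and one catch block per boundary crossed.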


I think we're actually not that far off in our opinions.

Throwing an exception is an implicit part of our contract, whether we declare it or not. Since any method can throw an exception, that implementation detail is already exposed; we just don't know about it.

Yes, propagating checked exceptions through a long chain is indeed a pain, and as I said before, we need to be vigilant about using them correctly. But if I have library or infrastructure code that is doing IO/SQL, I still want people to know about and handle the failure correctly.

I think they were never picked up in other languages because no one likes to tidy their room. You want to write a unit test or a hello world and suddenly there's a checked exception... no fun. Also, they were used for many APIs that people felt were built badly (e.g. the infamous encoding exception, URL format, etc.).


> Throwing an exception is an implicit part of our contract, whether we declare it or not

I think this is the main reason why checked exceptions and errors as values work. The alternative is having to always specify all the exceptions in documentation so the caller knows what to expect, and hope that documentation covers everything and doesn't get out of date. That also has to propagate if the caller doesn't handle all exceptions, just like checked exceptions.

It's much better to encode this directly with checked exceptions or Result<T, E>. Why people like errors as values so much but dislike checked exceptions is beyond me. They're basically the same thing with different syntax.
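The claimed equivalence is easy to see side by side. Here is a minimal hand-rolled Result in Java (the type and names are illustrative, not a standard API; requires Java 21 for sealed interfaces and pattern-matching switch):

```java
public class ResultDemo {
    // A minimal Result<T, E>: either a value or an error, never both.
    sealed interface Result<T, E> {}
    record Ok<T, E>(T value) implements Result<T, E> {}
    record Err<T, E>(E error) implements Result<T, E> {}

    // Errors-as-values version of a fallible parse.
    static Result<Integer, String> parse(String s) {
        try {
            return new Ok<>(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return new Err<>("not a number: " + s);
        }
    }

    public static void main(String[] args) {
        // The caller must consider both cases, just like catching a
        // checked exception or declaring throws.
        for (String s : new String[] {"41", "oops"}) {
            switch (parse(s)) {
                case Ok<Integer, String> ok -> System.out.println("ok: " + ok.value());
                case Err<Integer, String> err -> System.out.println("err: " + err.error());
            }
        }
    }
}
```

Replace `switch` with `try/catch` and `Err` with a checked exception type and the caller's obligations are identical; only the syntax differs.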


You make a great point about exposing the exceptions of library code. Why _don't_ linters deal with this better? It seems like you could throw up a yellow squiggly on any public method that throws an exception outside of its package or some such thing.

I agree that it takes a lot of discipline to do this properly. Is that discipline alleviated with unchecked exceptions or would you say that doing things right takes the same work, checked or unchecked?


Checked exceptions should be the default, exceptions should just be rare. They are overused in almost all languages.


The fundamental thing an exception does is allow you to choose where in the stack to react to the exception. The fundamental thing a checked error does is force you to actually choose (even if that choice is to bubble all the way up and crash the program), rather than forget and end up with a bug.

The choice about where to handle the exception is almost always a good thing, since it reduces boilerplate in dealing with that condition. The cases where it's not a good thing are when it's either impossible to deal with the condition at all (e.g. assertion failure) or where it must obviously be handled locally (e.g. errno==EAGAIN type stuff).

The requirement to actually make the decision is a good thing in some cases (e.g. I/O), but not in other cases (e.g. out of memory) because the overhead of making this decision so often is larger than the benefit of avoiding that class of bug. It's beneficial when it's something that can only happen while doing certain well defined activities.

There are also places where there just shouldn't be an error at all. E.g. null pointers shouldn't be an error; your program should be rejected if the compiler can't prove that this is impossible through static analysis (even if that just forces the author to write assert(ptr!=0)).

I guess I'm converging on a scheme that looks like: I/O errors are checked exceptions. Out of memory is an unchecked exception. Assertion failures are panics which cannot be caught. Division by zero is prevented by static analysis. I'm not sure what should be done with EAGAIN/EWOULDBLOCK type stuff -- it doesn't really feel like those should be exceptions, but introducing a whole new "error" handling idiom just for this doesn't feel great either.


I think there are idioms where your unchecked exceptions can be caught, like via some scoped context, like an actor.


Yes, that's intentional. They are still exceptions because there are rare cases where you want to catch them. They're unchecked because it's not worth the hassle of declaring them everywhere. Python's KeyboardInterrupt is another great example -- it can theoretically happen almost everywhere, but most scripts don't have anything sensible to do with it and it will never happen in a daemon or GUI program. If you happen to be writing a REPL, though, it could be useful to just interrupt the last command.


I'm not sure interrupt is the right idiom there either. An actor or coroutine dedicated to processing keyboard input seems more sensible, precisely because interrupts can happen almost anywhere, and non-determinism is not something you want to introduce accidentally or implicitly.


What about: NullPointerException? ArrayIndexOutOfBoundsException? IllegalArgumentException? UnsupportedEncodingException?


Null pointer exceptions shouldn't exist in the first place, they're fairly trivial to avoid. Index out of bounds can be similarly avoided, although the cost in terms of economics is a lot steeper. I like Rust's strategy of providing a checked and an unchecked indexing mechanism, where the unchecked indexing mechanism typically crashes the program rather than just throwing an exception.

Illegal arguments can often be avoided in the first place with sensible types (although, like null pointer exceptions, this typically requires some language support), and unsupported encodings should be rare assuming widespread UTF-8.
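"Sensible types" here means making invalid arguments unrepresentable, so validation happens once at construction instead of as IllegalArgumentException checks scattered through every method. A sketch (the `Percentage` type is hypothetical):

```java
public class TypesDemo {
    // Instead of passing a raw int around and validating it everywhere,
    // wrap it in a type that is valid by construction.
    record Percentage(int value) {
        Percentage {
            if (value < 0 || value > 100) {
                throw new IllegalArgumentException("out of range: " + value);
            }
        }
    }

    // No validation needed here: a Percentage cannot be out of range.
    static String render(Percentage p) {
        return p.value() + "%";
    }

    public static void main(String[] args) {
        System.out.println(render(new Percentage(75)));
    }
}
```

The single remaining throw sits at the system's edge where raw input enters; everything downstream can trust the type.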

So yes, I agree, those should all be very rare exceptions to find in a function signature.


Notice that parent comment was directed to a person who has the opinion of "everything should be a checked exception"... Not towards you.

Crashing the program isn't an option for a lot of applications. All those exceptions give me a runtime stack that I can log and an error that's very easy to fix quickly. The first 3 are runtime exceptions, which means you aren't forced to know about them and don't need to write defensive code.

I love rust and its compile time checks. Checked exceptions are very similar to the concepts in rust, check as much as possible during compilation. However, this makes rust code non-trivial for some dynamic behaviors and sophisticated object graphs. There's a balance that we usually choose based on a languages target demographic. Rust is great for system programming, Java is great for enterprise programming. Both are very different domains with very different needs.

The discussion about null is a huge one and I'm already spending way too much time in HN comments. Forgive me if I don't open that damn can of worms ;-)


I think almost everything should be a checked exception, with some possible exceptions (haha) being things like out-of-memory exceptions (although even then, with a good type system, I can imagine them optionally throwing checked exceptions). So it was definitely directed at me!

Rust is not the only language that handles these sorts of issues, and it's not the only way of doing so. OCaml also very neatly avoids most of these problems (again, indexing can optionally be checked), although it also includes exceptions on top of that. Even Typescript can be written in a style that avoids exceptions, although interfacing with third-party code is then a lot more difficult. You can even configure the Typescript compiler to enforce checked indexing and prevent index-out-of-bounds errors.

The point is that there is nothing fundamental about these exceptions, other than that many languages have been designed with the assumption that they must be present and generally possible. But there's no reason to keep with that assumption, and we can write good code - and even good enterprise code, as demonstrated by Typescript - without those assumptions.

I find it disappointing that we expect so little from our tools that we tolerate these sorts of problems.


One of these things is not like the others ...

The first 3 need to terminate your thread because something is irretrievably incorrect. The only sane thing to do is stop.

"UnsupportedEncodingException" has lots of things you can do to recover. You can try different encodings. You can request retry of something. You can check a CRC for corruption. Etc.

So, the first 3 really shouldn't litter your code. The final one probably should.


Finally a person who knows Java ;-)

For other readers the first 3 are all runtime exceptions and the last one is a checked exception that a lot of people complain about...

I think the last one is an example of why people hate checked exceptions and the solution to this was to create newer APIs that focus on UTF-8 that just don't throw that exception.


Why is "UnsupportedEncodingException" an issue?

It should throw only on creation and then never be an issue from that point forward, no?

If it's throwing everywhere then, as you point out, that is an API problem rather than a problem with exceptions, themselves.


Yes. It's 100% an API problem...


What about them? If your language semantics are pervaded by exceptions, that kinda suggests your language sucks. If you're unlucky enough to find yourself in this situation, then make exception contracts simple to express and propagate.


They aren't simple to express and propagate?

What's so hard about declaring throws?


You have to think about exception polymorphism. If you don't have exception inference, you also have to think about manually propagating exception contracts. Declaring it once is not a big deal, doing it over and over again is super annoying.


That's great, think about it...

Let's say I have a Collection and want to do IO within that collection... If IOException were a runtime exception, I could just write that code without handling it, and an unsuspecting user of my new IOCollection would suddenly get an IOException. Because it's checked, I need to explicitly deal with it in my class and can't silently add a serious failure mode to the behavior of the class.

Yes, it can still throw a runtime exception which is why the separation of the two is so important.

OTOH we have InputStream and OutputStream. Both throw an IOException for all their methods. So if I have one of them I should always handle the exception which is always the right thing to do...

But you might say, wait... What if I have a theoretical InputStream that will never throw an IOException?

Don't fret, we have that. It's called a ByteArrayInputStream and works roughly like this:

    ByteArrayInputStream bos = new ByteArrayInputStream(new byte[100]);
    int value = bos.read();
Notice I didn't use try and catch. Why? Because neither the constructor nor the read method throws an IOException, which is legal in Java's polymorphism implementation (an override may narrow the throws clause). However, the following code won't compile without a catch, exactly because I need to handle IOException for the generic case:

    InputStream bos = new ByteArrayInputStream(new byte[100]);
    int value = bos.read();


What do you mean when you say "exceptions should be rare"? Do you mean they should not occur frequently during runtime, or that it should be rare to see explicit exception handling in a piece of code?


Situations in which you must raise, propagate and handle exceptions should be rare.


Parsing input and handling I/O is a good chunk of what a lot of run-of-the-mill software does. Since this is Java and exceptions are the main way of expressing fallibility, I'm at a loss as to what code that needs to indicate fallibility should be using other than exceptions.


Return values obviously, with suitably designed method contracts. Some (maybe most) types of I/O errors are proper exceptions. Parsing failures are not.
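In Java, "return values with suitably designed method contracts" for a parsing failure usually means something like Optional (a sketch; the `parsePort` helper is made up for illustration):

```java
import java.util.Optional;

public class ParseDemo {
    // Parsing failure as a return value: common, expected, handled locally.
    static Optional<Integer> parsePort(String s) {
        try {
            int p = Integer.parseInt(s);
            return (p >= 1 && p <= 65535) ? Optional.of(p) : Optional.empty();
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        System.out.println(parsePort("8080").orElse(-1));   // 8080
        System.out.println(parsePort("banana").orElse(-1)); // -1
    }
}
```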


So now you have two different error paths, one in which the file can't be accessed, and one in which the file can be accessed but the contents are corrupted? The developer must handle exceptions and they must still check if the function returned some sentinel error value?

What's the point of forcing two different error paths on the developer?


They are different classes of errors, so you want different error paths. Parsing errors are common and should be handled locally with all of the usual logic. Exceptions are not supposed to be common errors which is why they trigger non-local control flow.


Agreed. Too often exceptions end up driving normal control flow, when they should just be... exceptions.


You must have a real panic when you see a sign saying “No parking except weekends.” Weekends happen all the time!


So if you write code that handles dates you'd throw NotWeekendException to notify the caller that it's not weekend?

Exceptions should be for exceptions, not control flow.


I'm curious how this is possible? Exception throwing and handling is fundamentally a flow-control construct, i.e. throwing an exception must control flow. A counter-example snippet where throwing an exception does not control flow would be appreciated.


He means exceptions are not for common control flow that shows up everywhere, like a return value. You want an exceptional control flow change when your hard drive runs out of storage, but you don't want exceptional control flow change when your hashtable doesn't have the key you're looking for.

The categories and frequency of these conditions are very different.
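Java's own collections already follow this split: a missing key is an ordinary value-level result, not an exception. For instance:

```java
import java.util.HashMap;
import java.util.Map;

public class LookupDemo {
    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        counts.put("a", 1);

        // A missing key is normal control flow: no exception, just a default.
        System.out.println(counts.getOrDefault("a", 0)); // 1
        System.out.println(counts.getOrDefault("b", 0)); // 0
    }
}
```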


Hmmm, sounds like it just transforms into a Sorites paradox, no?


I don't think cases encountered in real work are really that unclear. It can sometimes seem unclear if you're used to language semantics that are inherently riddled with possible error states (like languages that allow null).


An exception would be if someone tried to get the parking status for "Blarghsday" from the function or some other impossible state (We all know time is weird[0])

If the expected operation is to return parking allowed status for a specified date or a weekday, I'd expect the function to return true or false unless the input is malformed (Actually I'd prefer it to return some kind of object that can give more context to the true/false, but I digress).

During normal operation it should never throw an exception. Because if I get an exception, I'll most likely either throw it up the stack or log an error, which will ping someone in PagerDuty if it happens too often.

[0] https://gist.github.com/timvisee/fcda9bbdff88d45cc9061606b4b...


I'm afraid that this has left me more confused.

What's normal operation? It sounds rather subjective.

It sounds like there are those who "expect the unexpected" and those who don't, and this plays out in weird programming language usage discussions. Among all the expectations that one could have about a program, people have the impossible choice of being either complete and inconsistent or incomplete and consistent in their operational definition of "normal". Is it not?


It says the word exception right there on the sign.

People keep parroting “exceptions should be for exceptions” like it means anything. Just because when it was first used it was given a particular name doesn’t really tell you anything about how it should be used.


You're right, weekends happen all the time, which means they're not exceptional, so you shouldn't throw an exception if you encounter a weekend.


> Checked exceptions don't mean I need to handle the exception right now.

Yes it does. You either use try-catch-catch-...-catch, or you append the new exception to the ever-growing list of exceptions your function might throw. Even worse: when two functions called by the function you design can throw the same type of exception but for different reasons (e.g. two functions cannot find a file, but those are different files), then, if you were a diligent and mindful programmer, you'd have to catch both and re-throw with new types, so that upstream could differentiate between the two failures, adding even more cruft to your code.


If it's an ever-growing list then you're using checked exceptions wrong, and they're highlighting the problem of too many responsibilities in a single application.

Notice that IOException is one exception; you don't need to declare FileNotFoundException because it is an IOException. There's a similar hierarchy under SQLException.

That's part of the beauty of checked exceptions. If your throws clause becomes too long or it propagates too deep down the stack, then you know you have a problem. This would be hard to notice otherwise.
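Concretely, one catch clause for the parent type covers the whole hierarchy:

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class HierarchyDemo {
    static void open(String path) throws IOException {
        // FileNotFoundException extends IOException, so declaring the
        // parent type in throws covers it as well.
        throw new FileNotFoundException(path + " does not exist");
    }

    public static void main(String[] args) {
        try {
            open("/etc/missing.conf");
        } catch (IOException e) { // catches FileNotFoundException too
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```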


> Go's exceptions have the same unfortunate property as Java's checked exception.

Wait what? Here's where you lost me. You don't "throw" errors in Golang. Instead it's common practice for functions to return multiple values, one of which can be an error. Errors are just structs that implement the Error interface.

The language (and the compiler by extension) doesn't make you explicitly handle errors. On the contrary: you can choose to ignore errors altogether.

On the other hand, a checked exception in Java is one that MUST be either caught or declared in the method in which it is thrown. Code that fails to do this won't compile. You're completely wrong here.


The point GP was making is that both Go and Java (in code which makes heavy use of checked exceptions and different exception types) mix together the business logic with error propagation. So, even though in Java you throw exceptions and in Go you return error values, they both end up (or can end up, in the case of Java) having lots of error handling boilerplate all around a function.

In Java of course this doesn't have to happen, but it can end up happening if you chose to have many exception types and different exceptions types when crossing layers (e.g. try { doX(); } catch (IOException e) { throw new MiddleLayerException("failed to do X", e);}).

To be fair though, Java still allows you to write that like this:

  try {
    x = doX();
    y = doY(x);
    z = doZ(y);
  } catch (XException | YException | ZException e) {
    throw new MiddleLayerException("...", e);
  }
Which is still better than Go's:

  x, err := doX()
  if err != nil {
    return MiddleLayerErr(err)
  }
  y, err := doY(x)
  if err != nil {
    return MiddleLayerErr(err)
  }
  z, err := doZ(y)
  if err != nil {
    return MiddleLayerErr(err)
  }


It's weird to argue that null is how the hardware works. Many structs contain only bytes that are allowed to take on any value, so making them nullable takes extra space.


This is mentioned in the memory layout section in the post.


I'm not sure what "a marker" is and how to make it take 0 bits of storage.


This is an upcoming behavior in Valhalla which isn't yet a part of Java so there's nothing final. Currently Valhalla has primitive and value object types that determine identity behavior.


You mean non-nullable object types then. Which are faster and take up less space because that's how the hardware works :)


They're not faster in all cases. Flat, contiguous memory works great for some cases, such as looping over an array of data in a block.

This is very much a special edge case. For many other non-benchmark situations it isn't so simple. If you add the overhead of memory copying and the lack of identity, things are more complex.


> Calls that must have proper cleanup after them e.g. SQL, IO are checked.

Why? In C# you use a “using” block and whether your code exits the using block successfully or with an exception, the cleanup is automatic.

RAII has been around at least since the 80s with C++.


We have it in Java too. For us it's try-with-resources and has nothing to do with checked exceptions. Checked exceptions remind you that you need to wrap the code and need to take that into account.

They also force you to declare that an exception is thrown if you don't want to handle it in the current method. That's important as the signature of the method carries an important failure that can't be dismissed and is enforced by the compiler.
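The two mechanisms compose: try-with-resources handles the cleanup on every exit path, while the checked IOException still has to be caught or declared. A small sketch:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CleanupDemo {
    public static void main(String[] args) {
        // close() is called automatically whether we exit normally or
        // with an exception; the compiler separately forces us to deal
        // with the checked IOException from read() and close().
        try (InputStream in = new ByteArrayInputStream(new byte[] {42})) {
            System.out.println("read: " + in.read());
        } catch (IOException e) {
            System.out.println("io failure: " + e.getMessage());
        }
    }
}
```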


> important failure that can't be dismissed

But importance of the failure is determined completely by the program, not the library.

Grep fails to open a file for reading -> message the user and exit

Nuclear reactor controller fails to read an important file -> initiate reactor shutdown or something.

If file read is critical, you have to handle failure no matter what the interface is. Because you know that disk can fail.


Grep isn't a library, it's a program. Some things are important at the library level, e.g. an SQL exception must have proper cleanup. However, in some cases you don't want to clean up; you might want to try a different approach, so generic cleanup code isn't necessarily the right thing.

E.g. in the case of grep: say I wrote grep and want it to be generic, so I wrote a library that implements grep. Then I write a grep GUI tool. Oops, it exits if the file isn't found instead of showing an error dialog. With exceptions this is communicated up the layers. That's their purpose.

If I'm writing generic code and that code is used in a nuclear reactor I would very much not like my failure code to decide what to do. That's why we have exceptions, they punt the responsibility to the next person up the chain. I have no idea how to initiate a nuclear reactor shutdown etc.

But as API authors how can we make sure the person who writes the code up the chain knows that this is something crucial?


Yes, grep and nuclear reactor controller, and your gui grep are applications.

They all use library for file reading.

As the author of a file-reading library you don't know how critical the failure to open a file is, or how it should be handled. My point was that you know those things only at the application level.

So it feels a bit misguided to decide at library level which failures are important. (I.e. which are checked vs which are unchecked)

> But as API authors how can we make sure

First thought: documentation. (which is required for both, checked and unchecked exceptions).

But overall, it’s not library author’s responsibility or ability to “ensure”. You can’t force correct handling of exception. At best one can make it a bit more annoying to ignore, and convenient to do the right thing.

My view:

- all exceptions should be unchecked

- assume all code throws

- if “log and exit/continue” is not enough and you need to know exact exception types, dig into the docs.

- read docs during lib version upgrades


The library knows which are crucial issues that warrant your attention. A checked exception is a regular exception, it's just one that you can't miss and must either handle or propagate.

As a library author if you don't feel you need it then don't use it. It's optional. That's the beauty of it.

> First thought: documentation. (which is required for both, checked and unchecked exceptions).

Documentation < compiler checks.

> But overall, it’s not library author’s responsibility or ability to “ensure”.

Here we differ. Part of that is the domain we spend most of our time in. If you're giving me the example of grep then great, I see your point. Notice I'm giving an example of a high availability huge enterprise app.

Assuming all code throws forces overly defensive code which can cause a lot of problems. Yes, there's always a "catch all" but that's not a valid case for these sorts of apps.

Exit or continue are never enough for these sorts of applications.

We have dozens of dependencies and sometimes more. Some of them have mountains of documentation. Often we delay updates since these are enterprise systems. Going over 3 years of diffs in docs is just not a feasible option; just the diffs in Spring Boot alone and all its dependencies would take several years.


> The library knows which are crucial issues that warrant your attention

The library author can’t know what’s crucial in the context of my program.


Indeed. For most applications, an out-of-memory exception is typically one that's not necessary to handle and just let it bubble to the top, as there's not much sensible you can do if you can't allocate a few hundred bytes for a new object.

However I've written several pieces of code where allocating a large buffer could fail but where this failure was not crucial. So I handled the out-of-memory exception and just moved on.


This feels argumentative but fine.

If you're unsure, make it a runtime exception. I agree that a lot of the problems people have with checked exceptions stem from people overusing them.

If you don't want to handle the checked exception, wrap it with a runtime exception. But as an author, if there's something important, I want to give the user of the library as much help as I can.


> But as an author if there's something important

You're still ignoring other people's point in this thread: you (the author of a library/package) cannot decide what is important for the users of your library. You can guess, but your guess will always be wrong for a subset of your downstream users.

And speaking from the other side, as an application developer: for most of my code, an ArrayIndexOutOfBoundsException is a pretty serious exception. How do I go about making it a checked exception?


ArrayIndexOutOfBoundsException is a runtime exception...

You're ignoring the fact that I'm not saying EVERYTHING should be a checked exception. Quite the opposite.


When I’m using grep, it’s usually in the context of shell script. For all intents and purposes it’s a library.

I can choose to handle the error myself or let the shell script crash by having “set -e”.

Grep returns a non zero error code and it’s still up to me to decide how to handle it.


That's your point of view, and it's up to you. My experience is different and often involves very large projects with serious continuous-uptime requirements. When you have dozens of developers committing to a project and using the APIs, exiting with an error just isn't an option.


I did say

> I can choose to handle the error myself


That's not a practical strategy in a large system. That's why we have exception handling.

When we write generic library code we have no way of knowing how to handle an error in the full system. We want to concentrate some of the error handling while still making localized decisions in various places.

E.g. I want metrics and observability details updated, yet locally I want to retry some behavior. Doing this inline every time there's an error would force every library and every part of my code that can fail to know how I plan to handle the error.


Isn’t that what I just said? It’s my responsibility as a consumer of the library to decide how to handle it?


Then I must have misunderstood you as speaking from the perspective of the library. Sorry.


But the point is that in both cases the failure needs to be handled somewhere and somehow. Surfacing the error might be a valid option, but to make the decision you need to know that the error is there in the first place.

This can be solved with good documentation, but it can also be solved in the type system. Typically, if I can get my computer to do work for me (e.g. make sure that I've handled all possible error cases) then I'm much more confident in my code, hence why I see a lot of value in a well-designed checked exceptions system.
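"Getting the computer to make sure I've handled all possible error cases" can be encoded in Java's type system with a sealed error hierarchy and an exhaustive switch (a sketch with made-up error types; requires Java 21):

```java
public class ExhaustiveDemo {
    // All the ways this operation can fail, as a closed set of types.
    sealed interface SaveError {}
    record DiskFull(long neededBytes) implements SaveError {}
    record PermissionDenied(String path) implements SaveError {}

    static String describe(SaveError e) {
        // A switch over a sealed interface must be exhaustive: add a new
        // error type and this stops compiling until it's handled.
        return switch (e) {
            case DiskFull d -> "disk full, need " + d.neededBytes() + " bytes";
            case PermissionDenied p -> "cannot write " + p.path();
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new DiskFull(4096)));
        System.out.println(describe(new PermissionDenied("/var/log/app")));
    }
}
```

A checked `throws` clause gives a weaker version of the same guarantee: the compiler knows the set of declared failures, but nothing forces a catch block per case.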


Exceptions aren't normal parts of the type system though, they're a way to enforce control flow to deal with them. Another way to do this is with a Result type, which would actually be part of the same type system that you use in the rest of the language.


Why can't they be part of the type system? Rust is a good example of a case where they are part of the type system (albeit in a more monadic form). You've also got typed effects, where effects are essentially a generalised form of exception.


> But importance of the failure is determined completely by the program, not the library.

Exactly. I think this is the real crux about what's wrong with checked exceptions. It puts the responsibility to decide what exceptions are important on the library, where it doesn't belong. Only the user of the library knows that.


Isn't it exactly the opposite? Checked exceptions are for libraries to declare "exceptions" they can't handle themselves. You, the user of the library, have to deal with them (or declare them checked yourself¹).

I'm not a friend of checked exceptions myself, but I still think it's the opposite.

¹ which leads to the real issue with checked exceptions: they propagate through dependencies, if one nested dependency adds another checked exception, all dependencies have to add the exception or handle it themselves.


I don't think we disagree it's just a different perspective. The forced handling or propagation is what makes them annoying, but I think they're conceptually wrong.

It would be another matter if they were designed such that you could fix an issue and continue the call on the happy branch, but I suspect the cases where something like that would be applicable are very few.


Grep fails to open a file for reading -> message the user and exit

That should be: message the user and process the next file


:)


What “reminder” do you need to wrap your code? I can’t think of an instance where you wouldn’t wrap your code in a try/catch block and either log and crash, log and continue, or, for a specific exception that you know about, do “something”. But most of the time it’s the first two if you’re using RAII.


Let's say I write an API for handling IO. DB or File or networking or anything really.

I get a failure, what do I do?

I can throw an exception, but here's the important part: how do I know the user of my API will actually do the cleanup after me? I can't do the cleanup myself, since this is part of an API, not the actual usage of the API. How can I give the user of this API the right hint that "you need to pay attention to this" without worrying too much?

Checked exceptions are that. A large part of the problem is when people use them for things that aren't important, but when they are important I can just declare a "throws" and know that someone will handle the exception along the chain.

With runtime exceptions I have no guarantee.


Any chance you have a code/pseudo code example of this?

My naive take on this would be “just use raii/closable”. (But easily and likely, I misunderstood)


Let's say Java had no checked exceptions. I could just write IO code without doing a try since no one is forcing me to check anything.

But let's go further. Let's say I have an API x(). This API accesses my database as part of a larger transaction.

I invoke x() and it fails with an SQLException which is declared. I can revert the transaction or I can choose to retry x(). The checked exception notifies me that there's an important decision I need to make at this point, cleanup is one option but in some cases there are more. Furthermore, cleanup isn't always enough. In a case of an IOException I'd often want to notify the user e.g. if the disk is out of space I want to show an error message somewhere...
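A hedged sketch of that decision point (x(), the retry count, and the rollback hook are all made up for illustration, not anyone's real API):

```java
import java.sql.SQLException;

class TransactionExample {
    static int attempts = 0;
    static boolean rolledBack = false;

    // Stand-in for the x() API above; fails on the first call.
    static String x() throws SQLException {
        if (++attempts < 2) throw new SQLException("deadlock detected");
        return "ok";
    }

    static String invoke(int maxRetries) {
        for (int i = 0; i < maxRetries; i++) {
            try {
                return x(); // happy path
            } catch (SQLException e) {
                // The declared exception forces a decision here:
                // retry the call, or fall through and revert the transaction.
            }
        }
        rolledBack = true; // revert the enclosing transaction
        return null;
    }
}
```

The point is that the `throws SQLException` declaration makes the compiler walk the caller to this decision; with an unchecked exception the retry/rollback choice is easy to forget entirely.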


On its own, seeing ‘SQLException‘ in a method signature does not provide you with the information: “this is part of a bigger transaction, and you have the ability and responsibility to retry or rollback”.

That will be part of lib documentation, which should be read & understood regardless of existence of checked exception.


No. The transactional context is something the caller to the API knows. The person implementing the API that might fail on the SQL call doesn't know what's going on. That's why we would let the exception propagate.

> That will be part of lib documentation, which should be read & understood regardless of existence of checked exception.

First, checked exceptions *are* documentation for the lib. The best kind of documentation.

Second, really?

I would love to live in your world where people read documentation and where we all perfectly update the docs for everything. But both sides of this equation leave a lot to be desired in my world.

I don't read the documentation of most changes to most libraries. If I did that I would never get anything done. I can't even keep up with every commit that goes into the project I'm running. There are too many changes and too much code (I'm talking 30+ non-trivial merges per day).


> Let's say Java had no checked exceptions. I could just write IO code without doing a try since no one is forcing me to check anything.

Any decent C# linter will notice that the class implements IDisposable and warn you that you should at least be wrapping it in a “using” block.


That's true for Java too. But that's only a part of the problem which is why I gave the second example.

Also, notice that a linter and a compiler error are different. I agree that people *should* always use a good linter and IDE. The reality is sadly far from that.


This is what try-with-resources does in Java. Introduced in Java 7, so better late than never.


Anytime I see “something was added to Java” late in the game, it’s usually an ugly hack.

But I have to admit that the syntax for it is actually sensible.


Not exceptions, but zig has explicit error return types, and it works great. Especially since you can also just "tell the compiler to figure it out". There are a few cases where the compiler can't figure it out, but in those cases you grudgingly annotate the error types and it's done.


From your OC:

> Null is fast. Super fast. Literally free.

Yes and: The JIT will optimize away Null Object method invocations.

All my composable classes have a Null Object. No null checks means concise iteration (eg graph traversal). Eliminates NPEs. Just as fast.

Win / win / win.

Optionals (classes and operators) continue to be turrible mistakes. So much unnecessary effort, so little benefit.

Ditto @Nullable and @Nonnull.

Someday, maybe, I'll make a javac compiler plugin which autogenerates Null Object implementations and converts "Node abc = null" to "Node abc = Node.DEFAULT_NULL_OBJECT_INSTANCE".

https://en.wikipedia.org/wiki/Null_object_pattern
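For readers unfamiliar with the pattern, a minimal sketch (Node, LinkedNode, and the traversal are made up for illustration, not the parent poster's actual classes):

```java
// Null Object pattern: a shared, inert instance stands in for null references.
interface Node {
    Node next();
    int value();

    // The Null Object: traversal terminates on it without any null checks.
    Node NULL = new Node() {
        public Node next()  { return this; }
        public int  value() { return 0; }
    };
}

class LinkedNode implements Node {
    private final int value;
    private final Node next;

    LinkedNode(int value, Node next) { this.value = value; this.next = next; }
    public Node next()  { return next; }
    public int  value() { return value; }
}

class Traversal {
    // No null checks and no NPE risk: the Null Object ends the walk naturally.
    static int sum(Node n) {
        int total = 0;
        while (n != Node.NULL) {
            total += n.value();
            n = n.next();
        }
        return total;
    }
}
```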


It seems like implementing Null Objects, and making incomplete attempts to prevent values from ever being null, is not really an argument in favor of nullable values being good, or being the only kind of value a language should have.


I'm not smart enough to parse this. Try again?


"All non-primitive values are nullable references" is a feature of the language. You posted that you are trying to avoid using that feature (instead using non-null references to a special Null value?) and trying to avoid having null references for the types you create. It seems like you do not actually think the feature is a good feature.


> Ditto @Nullable and @Nonnull.

I was referring to the annotations. My proposal moots them.

References would still be nullable.


Can you explain why it is good that null is a valid value for your types at the same time as you are trying to prevent them from ever containing that value?


You're overthinking this.

Semantics of 'null' are the same. Underlying implementation is different, better.


If the semantics were the same, you wouldn't need a replacement and all your proposed changes would have no observable effect on the behavior of any program.


Semantics != behavior.

In truth, I'm having a hard time deciding whether you're trolling or actually truly, sincerely not understanding this.

Read the wiki page. Implement a few Null Objects. If you still have questions, about this trivial concept, then I'll be happy to go a few more rounds.


Checked exceptions failed because 99 times out of 100 the exception is not recoverable, so the try/catch block is just wasting everyone's time. (In 7 years as a Java dev I can think of one time I wrote code that tried to recover from IOException instead of just making the caller retry.)

Even when the exception is theoretically recoverable, it has to get propagated up properly to the caller who should be handling recovery from it. But checked exceptions don't propagate sanely through executors and across RPC calls, so good luck with that.


I find the rust `?` construct nice for this: give me the success result or propagate the error. It is based on a result type though.

The try/catch construct requires too much code for the common propagate case.

It would be nice if Java had a similar construct for error handling.
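For contrast, a sketch of what the common propagate case costs in Java today (the class and method names here are hypothetical):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

class PropagationSketch {
    // Option 1: a throws clause on every method in the chain...
    static String readConfig(Path p) throws IOException {
        return Files.readString(p);
    }

    // Option 2: ...or wrap-and-rethrow boilerplate at each level.
    // Rust's `?` collapses this whole dance into a single character.
    static String readConfigUnchecked(Path p) {
        try {
            return Files.readString(p);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```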


The Task type in C# with the await unwrap sugar is similar to this. You can even check for and grab the exception without throwing if you want to.

Still, it's too bad the error type must be a throwable. I kind of wish it could just be a plain type so you can error or cancel without generating stack traces. Awaiting a failed task could still throw.

Would be a nice perf boost. As it is now, you don't want to actually cancel or fail a C# task in performance critical code. You need to successfully complete the Task and return an error, which is pretty confusing.


"throws IOException" is too much code? Or is the issue more that you can't really do autocoercion to a declared thrown type in Java the same way that you can do in Rust?

Proliferation of types is an issue in Java, but the whole language has that problem. It's not just exceptions.


Personally if I were to argue why rust error handling is better than Java it wouldn’t be the too much code part.

I feel the same way about Go vs Java or Zig vs Java, I think errors as values makes more sense.

It’s the laws of the universe breaking and control flow changing on errors that I think most people hate.


Yeah, that'd be nice.


Though I mostly agree it's slightly more subtle IMO. They failed because the writer of the method cannot know what the caller can recover from but must decide before compile time. The caller gets little say.


I don't think they are meant to be recoverable from, but instead they are a way to provide a controlled shut down of an irrecoverable failure, or to limit the blast radius of a localized failure.

In that sense, they are quite useful. Saving the file generated an exception - instead of suddenly closing the application, display an error. Or maybe a database is not accessible anymore. Instead of suddenly ending the service, trigger a controlled shutdown (log, send alerts, etc).

I don't think anyone would expect exceptions to propagate through RPC calls. A call fails, probably containing a description of the fail. Why should it propagate?

There are ways to handle errors and return errored RPCs without exceptions, such as internal error codes. But error codes are just as irrecoverable as exceptions.


In a desktop application that is not allowed to totally crash, or at least has to crash kind of gracefully, checked exceptions are useful.

But in the world that most Java devs live in, which is various flavors of RPC server, failing requests is fine. If lots of requests fail, your monitoring infra should page someone, and that someone will go log spelunking and figure out what's broken.

Very occasionally it turns out that the thing causing the RPC failures is a recoverable exception, and then you should wrap the problematic stuff in a try/catch block. (Often you'll wind up having to detect the recoverable error case by conditioning on substrings in the exception message, which the library owners will arbitrarily change in future releases. So make sure to write regression tests so you'll catch this when you upgrade the library. Java is fun!) But 99% of the time the failure is "network is busted" or "config was invalid" or "hard disk failed" and you should not be defensively programming against all those possibilities.


> you should not be defensively programming against all those possibilities

This is where libraries and frameworks come into play - they defensively program against that for you. And wrap it all up in a simple interface with, well, checked exceptions.


If I need to write some bytes to an S3 bucket but the network is hosed, there's literally nothing useful I or any library can do until the network is back up.

RPC calls will fail, error logs should get written, and a monitor should get triggered, and someone should get paged so they can wake up and figure out why the network is hosed.

Nowhere in there is it useful for me to wrap all my S3 writes in try/catch blocks.


Error logs and monitors need to know the operation failed somehow, whether via try/catch or a return value.

> literally nothing useful I can do

For some services, yes; for others, queuing the operation for retry is a thing.


Error logs and monitors find out because the RPC fails, which causes an error to be logged and a failed request metric to be incremented. (The UncaughtExceptionHandler will do the same for stuff failing in background threads.)

If eventual consistency is important, enqueuing for retry has to be done BEFORE you try to write, not on write failure. If/when the write succeeds, you remove the item from the retry queue.


I always feel like the error domain changes as you go across program domains.

I keep returning to something about error handling that bothers me. People argue about the proper way to handle errors without really considering that it's highly context sensitive. Which makes me think you should be able to pass an error handler down to lower-level functions that tells them what to do when something bad happens. Sort of like recent ideas where you pass functions an allocator instead of them calling malloc() directly or whatever.


Is this a problem with checked exceptions or a problem with Java? Couldn't exceptions be treated as robustly as param and return types in the generic system so they are more composable? Why not a Future<V, E1> interface?

The argument that some exceptions will always be runtime (like oom) so checked exceptions are flawed is harder to argue against but I would also say it's a matter of opinion whether you feel like it's worth dropping entirely.


You can return union types in Java if you want that, although it'd be nice to have better first class support for them.

The issue though is that forcing people to handle irrecoverable exceptions is just kind of dumb. If I do file IO, I know it'll barf sometimes. Sometimes I care, but the vast majority of the time I don't.

A lot of Java servers have RPC endpoints that basically say "write X to file Y" or "read X from file Y" and if the S3 connection is busted, there's nothing the server can do about that. Someone has to go log spelunking, figure out what's wrong (network is misconfigured, AWS is having an outage, whatever) and fix it. So it is bad API design on Java's part to make every single one of those RPC endpoints wrap every single thing that does file IO in a try/catch.


Checked exceptions are used to make up for Java's inability to return more than one value, plus its inability to wrap two values without defining a new type. In other words, I think checked exceptions are basically a symptom of a lack of object literal syntax. This is in addition to their status as a "cool language feature" that is a siren song to new, bright programmers looking to spice up their designs. Exceptions are in general problematic (they move the program counter in a disjoint way; "non-local transfer of control", I think it's called) so it makes this particular siren song doubly deadly.

It would probably be handy if someone wrote a pamphlet on "Refactoring Exceptions" to give teams the confidence to refactor exceptions out of their code. I'll offer $20 to Martin Fowler to write such a thing.


Exceptions “don’t move the program counter in disjoint ways”, they are part of the “structured gotos”. In fact, it has the same control flow as an early return does, with the handler being locally found in a parent’s (recursively) method body.

Also, the point about multiple return types is pointless — Java already has Optional, and a proper Result type is completely feasible to implement and use. So is a Pair<A,B>, if tuples are what you mean. Some are just not part of the standard lib by default.


But the issue about the multiple return types is not pointless at all: Optional types, and pair types, were added to the language way later than checked exceptions. We didn't even have actual generics back then! So the checked exceptions really were a way to get around lacking alternative error management features.

If backwards compatibility wasn't a concern, checked exceptions would probably go away in a version or two, and we'd have some kind of monadic error type instead. But Java takes this seriously, and the standard library itself has methods with checked exceptions, so the timeline to go from checked exceptions to something else is very long.


> Result type is completely feasible to implement

Yet it is rarely done. Lots of boilerplate to replace what is easily done in other languages. If Java provided a native tuple type, new patterns would appear. As it is, too many lines for one-off returns. Send off serialized JSON and deserialize it later; easier than specific DTOs.


>with the handler being locally found in a parent’s (recursively) method body

Yeah, that doesn't sound disjoint /s


That’s the exact same thing as a function call being the last statement of a function, which is itself the last statement of another function, etc. Literally just popping off stack frames.


Popping one stack frame is a normal return; popping N stack frames is precisely what is meant by "non-local (or disjoint) transfer of control".


It pops one stack frame at a time. It’s up to that function to determine how to proceed further, the exact same way it would happen with the last-call chain I was talking about.


FYI: With Java 21 (2023/09/19) this is finally possible.

Relevant JEPs:

- 441: Pattern Matching for switch

- 440: Record Patterns

- 409: Sealed Classes

- 395: Records

It is kind of similar to Scala's version of pattern matching with case classes.
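A rough sketch of what those JEPs combine to allow (all names here are made up; requires Java 21+):

```java
// A Result-style sum type via a sealed interface (JEP 409) and records (JEP 395).
sealed interface ParseResult permits Ok, Err {}
record Ok(int value) implements ParseResult {}
record Err(String message) implements ParseResult {}

class Parser {
    static ParseResult parse(String s) {
        try {
            return new Ok(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return new Err("not a number: " + s);
        }
    }

    static String describe(ParseResult r) {
        // Pattern matching for switch (JEP 441) with record deconstruction
        // (JEP 440): the sealed hierarchy makes the switch exhaustive, so no
        // default branch is needed.
        return switch (r) {
            case Ok(int v)     -> "got " + v;
            case Err(String m) -> "failed: " + m;
        };
    }
}
```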


And it won't get adopted any time soon due to backwards compatibility and existing code. Anybody who wanted it is already using vavr at this point.


Just came to say this. I think the best way to represent failure is in the return type, but Java made that very cumbersome and noisy.


This is a very old topic. Insights from Anders Hejlsberg:

https://www.artima.com/articles/the-trouble-with-checked-exc...


Great article with a lot of pithy insights. Here's one of the core issues with checked exceptions; the second-to-last sentence is the point.

> Anders Hejlsberg: Yeah, well, Einstein said that, "Do the simplest thing possible, but no simpler." The concern I have about checked exceptions is the handcuffs they put on programmers. You see programmers picking up new APIs that have all these throws clauses, and then you see how convoluted their code gets, and you realize the checked exceptions aren't helping them any. It is sort of these dictatorial API designers telling you how to do your exception handling. They should not be doing that.


The issue with all exceptions, and error handling in general, is that only very rarely is there actually recourse for an error. For instance, in an HTTP request handler, the vast majority of errors will end up as something like a 500. Checked exceptions are annoying because they make you explicitly handle an error when you likely already have a handler in place to handle all exceptions, checked or otherwise.


To me, you've kind of hit both the pros and cons of checked exceptions. They make the calling code deal with an important error case. This is actually super useful for very important error situations. For instance, if you're writing some type of file-processing class and the class is unable to open the file for some reason, that might be a good situation for a checked exception, as it can probably happen reasonably often.

To me, the difficulty comes when writing a method that needs to throw an exception: deciding whether it should actually be checked. In fact, I read a few years ago that most exceptions should actually be unchecked, but at least then, this was still greatly up for debate.

... as I type this, I'm realizing that having the option of both unchecked and checked exceptions is nice in that it reduces the amount of error code you have to write. By throwing unchecked exceptions, this is error handling that the entire calling code-chain doesn't need to deal with. The obvious situations are system errors like out-of-memory errors and missing resources, but also those programmatic errors that are unexpected, like array out of bounds or ClassCastException.

Also, the calling chain then has the option to handle it if it's needed. For instance, if you're writing a low-level messaging queue that needs to report system errors (btw, this was a real situation that happened to me). Just my two cents.


Exactly, the boilerplate overhead isn't worth it when 95% of the time you don't have anything meaningful to do with the exception but let it propagate.


While I broadly agree with this, the author doesn't appear to go far enough himself.

> Functional error handling, using Option and Result types, is rapidly becoming the standard operating procedure in essentially every language, because it relies on nothing but values and types. They are more of the same, and so they fit right into the existing language machinery.

I agree that Optional types are better than checked exceptions, but they are worse than union types. A function with the union return type String|Error (read: String or Error) can directly return the result of another function whose return type is String. Similarly, a function which accepts the type String|Error as a parameter also accepts arguments of type String. With Option types this doesn't work: Option<String> and String are incompatible types, so code has to be rewritten if the types change.

Personally, I would go so far as to say that union types should replace exceptions in general, checked or unchecked, as well as any implicit nullability, which can be replaced with the explicit union type Foo|Null.

(Although some special syntax for handling Foo|Exception or Foo|Null is probably a good idea, as "error" and "nothing" are pretty general categories.)


I often run into the opposite problem: There are union types, but I would really like sum types (i.e. tagged unions). The common case is a data structure where I cannot guarantee that the user will not want to use Null as a value in my data structure, but I also need to represent the absence of values myself. With a sum type, my absence would be None and the user’s absence would be Some(None). With union types, I need to somehow create an extra sentinel value myself.


Something like String|NoAnswer|None doesn't sound too bad to me here...


This isn’t just a problem with custom types, it affects generic iteration types too. With a next() -> Option<T> method, an Iterator can iterate over any type of element.


Yeah, but there could be a type T for "any type" additional to union types, basically like True in Boolean logic. (Theoretically you could also have F for "no type", intersection types (AND), negation types (not Foo), and any logical combination of those.)


Except that NoAnswer has none of the language’s (and its library’s) support for the built-in None. And you might run into problems when trying to nest the data structure, which might be a reasonable thing for your users to do.


I just think that the nesting offered by Option & Co has overall much more downsides than upsides compared to union types.


Yeah, containers like Option<T> and Result<T> not having a proper subtyping relation to T is a major flaw. I mean, you are basically giving a stronger guarantee if you return T instead of T-or-an-Error, yet you break every call site.

I have been thinking about this in terms of a data oriented programming language like clojure, but having types/schemas. If you use Union Types for something like getting a value out of a map for a specific key, then if you knew the schema of a specific map you could reduce the return type from {T, Error} to just the type of the value T that you know is there.

Basically a sufficiently smart compiler with the necessary information could make you not have to deal with errors at all in certain cases. With Result/Option/Maybe this would not be possible. It would always infect the entire callstack and you would always have to deal with it.


The problem with union types for exceptions is that it becomes impossible to return an error as the correct case, or at least there is no way to discriminate any more. Functions that deal with errors also need to be able to have exceptional situations, and in many cases there are completely generic functions that can error out for generic reasons. A basic example: indexing out of bounds produces an error, so suppose one has a list of values that were themselves all obtained by indexing into lists, i.e. each element has type “Ok:Val+Err:OutOfBounds”; indexing into that list now becomes indeterminate.

The other issue is that the language needs to have support for union types rather than merely sum types. And there's a good reason almost no statically typed language has support for union types: it removes provability and it's not something that can easily be decided statically, requiring dynamic type tags. But then again, that's what sum types effectively are as well.

Typed Racket is statically typed and has union types, but it's an extra layer on top of a dynamically typed language, so the type tags were there already; it never had the option of erasing type data at run time, nor the need to.

In your example of “String|Error”, it essentially requires that, in practice, every String is tagged at runtime with type information.


> And there's a good reason almost no statically typed language has support for union types

For what it's worth, I learned about union types from Ceylon, which was statically typed.


Which is a very obscure language.

Type erasure at runtime simply isn't really possible with full union typing and, as a consequence, Ceylon has to box everything into a structure containing a type tag, which is obviously not an overhead a language such as Rust is willing to pay.


If your language supports implicit user-defined conversions (like C#), you could `return "foo";` from a method returning `Result<string>`.


I have extensive experience in C# as well as Java. It is bizarre to suggest that checked exceptions have failed. It is unchecked exceptions that have failed. It is the biggest flaw of C#, in fact. Why?

As an example, I wrote some very good C# code, carefully tested it, made it work flawlessly, then suddenly it started crashing. What happened? Someone made a change in a function I was calling, and it started throwing a new exception. This would have caused a compile error in Java, not a crash.

The list of recoverable exceptions that can be thrown by a method should be part of the contract. It should be written down by the programmer, and enforced by the compiler. If not, then to avoid crashing you would have to catch the root Exception class, which everyone agrees is a bad idea.

More on checked vs unchecked exceptions here: https://forum.dlang.org/thread/hxhjcchsulqejwxywfbn@forum.dl...


This is very simplistic view of the problem. This completely glosses over modularity, ABI, performance optimizations... just to name a few.

How are you going to write generic functions that take functions as arguments and re-throw the errors thrown by these functions, if you use checked exceptions? Will you require that the acceptable functions only throw exceptions that you like? -- Then your generic function is close to being worthless...

If exceptions are encoded in a function's interface, then they have to be in the ABI; but then you must have non-trivial types available when marshalling data between two components, so you cannot serialize the communication using some protocol with a fixed number of types (eg. JSON and friends), because now you need to account for the infinite variety of exception types.

Because of at least these two things, what I saw happen in a lot of Java / C++ projects (god blessed me with very little C# exposure) was that as soon as a developer encountered a function with checked exceptions, a wrapper was written which changed the type into a runtime exception. This is so because exceptions are supposed to be handled separately, and often the author of the function has no idea how they need to be handled, so they want to concentrate on the main goal of the function. Once functions grow into garlands of try-catch-catch-catch-...catch, the focus is lost. It becomes very hard to understand why the function was written in the first place, because the error handling takes over every other concern.


> Will you require that the acceptable functions only throw exceptions that you like? -- Then your generic function is close to being worthless...

Constrained generic parameters are actually super useful.

> If exceptions are encoded in function's interface, then they have to be in ABI

They already are in the Itanium C++ ABI

> you cannot serialize the communication using some protocol with a fixed number of types (eg. JSON and friends), because now you need to account for the infinite variety of exception types.

No, the only exception that arises is ser/deserialization error. You can also trivially represent an error in JSON using an object, which is exactly what JSON RPC protocols do. It's also never safe to throw across an FFI boundary and similarly nonsensical to throw across a serialization boundary, so I'm not sure why you'd care.


> Constrained generic parameters are actually super useful.

Then you missed the main point: if you require the generic function to accommodate all sorts of exceptions raised by the functions it may call, this requirement now propagates to every function it may call. So, you end up implementing functions with unnecessary "throws" because they might be used in a generic function which had to add that clause because of some other completely unrelated function.

Alternatively, you aren't writing generic code at all, you just write two implementations which happen to have similar names.


> How are you going to write generic functions that take functions as arguments and re-throw the errors thrown by these functions, if you use checked exceptions?

I've actually done this in Java before, with an interface type parameter used in a throws clause. I'm not sure how it interacts with module boundaries, it's kind of verbose, and it certainly didn't seem to be common practice. But it did work: the HOF-ish method that took a parameterized ThingFrobber<E> and used it to frob things would compile only if it also declared throws E, and the method passing the HOF a ThingFrobber (I think via lambda syntax) could catch the associated concrete checked exception type to sink it. IIRC I was using it to allow a checked early exit from a complex iteration, and it even propagated a slightly complex bound with multiple checked exception types more smoothly than I'd expected it to.


Right, I was thinking that this is the answer to the supposedly intractable problem posed by the original article.

Just make your ExampleInterface generic -- ExampleInterface<T>.foo() throws T.

That technique works really well, and I don’t understand why it isn’t more widely used (or at least it wasn’t when I last used Java for serious work).
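A minimal sketch of the technique, using the ThingFrobber name from the comments above (the exact signatures here are my guess, not the original poster's code):

```java
// A functional interface whose throws clause is a type parameter.
interface ThingFrobber<T, R, E extends Exception> {
    R frob(T input) throws E;
}

class Frobbing {
    // The HOF declares "throws E", so whatever checked exception the passed-in
    // lambda throws propagates to the caller with its precise type intact.
    static <T, R, E extends Exception> R applyFrobber(ThingFrobber<T, R, E> f, T input) throws E {
        return f.frob(input);
    }
}
```

If the lambda throws no checked exception, E is inferred as an unchecked type and callers need no try/catch; if it throws, say, IOException, the compiler forces callers of applyFrobber to handle exactly IOException.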


> write generic functions that take functions as arguments and re-throw the errors thrown by these functions

There is a philosophy that applies here: simple things should be simple, complex things should be possible. The scenario you're mentioning is not common enough that the language design should be centered around it.


It's a very common pattern, almost every modern language has a "map" function. The map function can throw a superset of the exceptions that its argument can throw. If checked exceptions can't deal with this, they'll be of limited use.


> The map function can throw a superset of the exceptions that its argument can throw.

Wait. Why should a map care about what exceptions its arguments can throw?

A map is storing a thingit. A thingit should exist independently before it gets placed into a map. Placing a thingit into a map should not invoke anything on the thingit. The only exceptions coming back from attempting to place a thingit into a map should be exceptions caused by the map.

What am I missing?

Obviously, there are maps that conflate themselves and do things like take ownership when an object is placed into the map. But that's not the general case and presumably you wrote the map specifically with that in mind.


You're thinking of a finite map, also called a hash table or a dictionary.

The map function takes a function and a list and applies the function to every element of the list:

  map(double,[1,2,3]) = [2,4,6]
Grandparent could've used the for loop rather than the map function to make his point:

  for i in [1,2,3]:
      print i * 2
-- because in general instead of the i * 2 we might have a call to a function that might raise an exception.

ADDED. That is wrong: the for loop would not be a good example at all, because it is not customary to declare the type of a for loop or to declare which exceptions a `for` loop might throw.


Whoops. Yeah, I wasn't thinking about map, fold, accumulate, etc. Thanks for the correction.


In the case of map function hopefully you're using it with methods that don't fail in serious ways, and don't need strong error recovery. If so Java has RuntimeException to handle that case. If serious errors are possible and strong error recovery is needed, then you need to avoid the conveniences offered by functional style programming.


The problem is that if some code you call throws a checked exception (InterruptedException being an extremely common culprit) then you must wrap... and suddenly nobody calling YOUR code can catch that InterruptedException reliably, because it's now a SomeException (doesn't even have to be RuntimeException specifically) with the original attached as its cause, which you now have to check for.

... so the basic "catch" syntax starts to fall apart because now you have to catch everything and resort to stuff like Guava's Throwables helpers.

It's madness.
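The dance being described looks roughly like this (a sketch with hypothetical names; the wrapped original travels as the cause):

```java
import java.util.concurrent.TimeUnit;

class WrapDemo {
    // Library code, forced to wrap the checked InterruptedException.
    static void doWork() {
        try {
            TimeUnit.MILLISECONDS.sleep(1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();            // restore the flag
            throw new IllegalStateException("wrapped", e); // checked type is gone
        }
    }

    // Caller: can no longer write "catch (InterruptedException e)";
    // it has to catch broadly and walk the cause chain by hand.
    static String call() {
        try {
            Thread.currentThread().interrupt(); // make the sleep fail
            doWork();
            return "ok";
        } catch (RuntimeException e) {
            if (e.getCause() instanceof InterruptedException) {
                Thread.interrupted(); // clear the flag again for this demo
                return "interrupted";
            }
            throw e;
        }
    }
}
```

Guava's Throwables helpers exist largely to automate that cause-chain inspection.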

The problem ultimately is variance: Methods are covariant, but throws clauses must be contravariant.

There are ways to solve this but "checked exceptions" (as in Java) are not the right way. Ask anyone who's worked in Scala on the JVM which they prefer and you'll have your answer.


It's possible to use a type parameter in a throws clause in Java, last I checked (as I also mentioned in more detail above). But it doesn't seem to be common practice, so interop is still a disaster.


That only works up to one checked exception type, because Java doesn't have anonymous union types.

It really is a lose-lose unless Java grows a more powerful system around this.


Sorry. This is nonsense. map() is the bread and butter of any program. Besides, you cannot ever decide whether an exception is important or not -- it's always in the purview of the user.

Re-throwing a non-checked exception is what I described as the usual / typical coping mechanism in languages with checked exceptions. Which is obviously a way to negate the whole feature.


Sorry, disagree. map() didn't even exist in Java until recently. It is convenience at the cost of some safety. If you're writing non-critical or throwaway code you may not care about strong guarantees. Personally, I prefer strong guarantees over convenience. I would only use map() for things that can't throw checked exceptions. A for loop isn't that hard to write.


> Someone made a change in a function I was calling, and it started throwing a new exception.

If it has nothing to do with your code, why not either let it bubble up the stack or try/finally it to do whatever clean-up you need to do before re-throwing it and letting it move on to a global logger or similar.

If it is something you care about you can always catch the specific exception type and handle it.

Maybe whoever worked on the other function wasn't even aware it might throw that particular exception.

Granted I am far more experienced in C# than Java, but the idea that with every code change you potentially have to review and declare every single type of exception seems like madness ("what happens if we pull the cord when it's here, what type will that throw?").


Imagine that the functionality implemented by your method is very important. This functionality should not be needlessly aborted. If so you want to be aware of errors you can recover from, correct? This is why checked exceptions are helpful. It gives you a guaranteed-by-compiler list of exceptions and you can decide which of those you should recover from.
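As a sketch of that (names and file path are made up): the throws clause is exactly the compiler-guaranteed list, and the caller decides which entries it can recover from:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

class RecoveryDemo {
    // The throws clause is the compiler-guaranteed list of failure modes.
    static String readConfig(Path path) throws IOException {
        return Files.readString(path);
    }

    static String loadOrDefault(Path path) {
        try {
            return readConfig(path);
        } catch (NoSuchFileException e) {
            return "defaults";             // recoverable here: fall back
        } catch (IOException e) {
            throw new RuntimeException(e); // not recoverable here: rethrow
        }
    }
}
```

If readConfig later grows a new checked exception type, every caller fails to compile until someone decides how to handle it, which is the guarantee being argued for.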


Or I can catch any exception out of the codepath I'm concerned about, examine it to see if it's one of the known set of exceptions I intend to handle, and then either do so or rethrow it.

If there's a benefit to compile-time exception checking over this method, I have to admit I don't see it. But I've also never worked deeply enough with Java to be familiar with the nuances of its exception handling, so that may be why.


The problem is that handling arbitrary errors can be very complex. It's easy to make errors in the error handling code. Especially if the error you are handling never occurs -- because there's no way to test error handling code for an error that never happens.

That's why it is really useful to know when a call can fail, and what kinds of errors you have to expect.

If you always have to add a generic error handler for possible unknown errors, then you'll write a lot of untested, dead code.


> Especially if the error you are handling never occurs -- because there's no way to test error handling code for an error that never happens.

Sure there is. Find the way it happens, do that, and test that the handler does what it's supposed to. Or, if you can't do that, mock the error.

I suppose that's not guaranteed to be straightforward in Java; it's been a long time but I seem to recall libraries being provided in compiled form, without sources, and it therefore not always being possible to identify the conditions that need to be set up in a unit test. If I recall that correctly, it makes me very glad I no longer use the language.

> If you always have to add a generic error handler for possible unknown errors, then you'll write a lot of untested, dead code.

The generic handler in this case is

    else {
      throw error; // rethrow
    }
You're not wrong that there's value in knowing what exceptions can be thrown at a given point in code. But, as I mentioned in another comment, that's something that can in the general case be known through static analysis, and should. In fact, the Java compiler must already be doing something substantially similar to that analysis for checked exceptions to work at all! Looking for throw statements rather than "throws" declarations seems like it shouldn't be that much more difficult, and I think the burden on Java developers could be substantially lightened thereby in that "throws" declarations would no longer be required.


Checked exceptions give you, essentially, syntactic sugar for handling just a few kinds of exceptions and re-throwing the rest.

It's useful when there are one or two error cases you want to retry or handle specially, but you want to just barf any other error up the stack. It's a specific use case but it's prevalent.

The downside is that the sugar can only separate your error conditions by Java type. If everything is just an Exception, you'll have to sort out your error cases in code.


By the sound of it, that's more like syntactic salt, or maybe syntactic thallium - `else { throw error; }` seems much simpler by comparison.

Being able to know what exceptions can possibly be thrown at a given point is useful, but seems like a problem better solved through static analysis than by requiring annotations.


Yes. But it’s not as simple as checked=good, unchecked=bad and vice versa.


This is one of the main reasons why I personally try to avoid exceptions and instead encode failure into the type system. For example, returning a Result<'t,string> rather than just a 't. (You can refine the type further). Altough, in most languages that approach results in a lot of boilerplate.

In F# it works quite well, though.
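Translated into Java terms, the same idea might look like this (a hypothetical sketch using sealed interfaces and pattern switches, so Java 21+; F#'s built-in Result and match are terser):

```java
// A minimal Result sketch: a sealed interface with Ok and Err cases.
sealed interface Result<T> permits Ok, Err {}
record Ok<T>(T value) implements Result<T> {}
record Err<T>(String reason) implements Result<T> {}

class ParseDemo {
    static Result<Integer> tryParse(String s) {
        try {
            return new Ok<>(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return new Err<>("not a number: " + s);
        }
    }

    static String describe(String s) {
        // The switch is exhaustive over the sealed hierarchy, so the
        // caller is forced to handle both cases at the call site.
        return switch (tryParse(s)) {
            case Ok<Integer> ok   -> "parsed " + ok.value();
            case Err<Integer> err -> "failed: " + err.reason();
        };
    }
}
```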


How is this different in practice? Checked exceptions are simply "syntactic sugar" for error types (exception objects with a few special handlers to return them and receive them).

You can write almost exactly the same code with your Result by ignoring the error until the point you would catch it with exceptions.

The only difference is that exceptions bubble up if unhandled, which requires a generic catch to match code ignoring errors otherwise.


> How is this different in practice?

It is quite a bit more explicit in that something can have an error, and in what ways it can error (if you enrich the type in that way), while still being ergonomic. And it reserves exceptions for the truly unexpected/exceptional cases.


Are we comparing the same things? Adding exceptions to the method signature is pretty explicit to me too.

It's also pretty ergonomic, depending on how you generally treat errors you don't care about.

I do find your point about exceptions being for unexpected things interesting: basically, split errors into two classes and use separate mechanisms for each.

I generally feel that this differentiation would never hold between different codebases (an HTTP library may consider network error something "normal", but users of that library might consider it exceptional), but it's surely an interesting concept.

I also think that we as developers are pretty bad at handling both expected and unexpected errors, so having two mechanisms will only make us less likely to do so (not by much, though).

FWIW, my take is that they are largely equivalent except for more or less syntactic support for the approach.


> FWIW, my take is that they are largely equivalent except for more or less syntactic support for the approach.

Yeah, as with lambda calculus and Turing machines, they are equivalent in capacity.

> Are we comparing the same things? Adding exceptions to the method signature is pretty explicit to me too.

Maybe, maybe not. IME the IDE support for exceptions is quite a bit worse than for returning results and such.

> I do find your point about exceptions being for unexpected things interesting: basically, split errors into two classes and use separate mechanisms for each.

> I generally feel that this differentiation would never hold between different codebases (an HTTP library may consider network error something "normal", but users of that library might consider it exceptional), but it's surely an interesting concept.

Aren't the two ways of handling it already a thing with checked and unchecked exceptions, though? As in having wrappers that convert one to the other.

> I also think that we as developers are pretty bad at handling both expected and unexpected errors, so having two mechanisms will only make us less likely to do so (not by much, though).

I don't quite agree with that: encoding errors in the type system conveys more intentionality than exceptions do, IMO.


> I don't quite agree with that: encoding errors in the type system conveys more intentionality than exceptions do, IMO.

Sure, but why keep using exceptions then at all? I am fine with returning errors making them explicit, but I would avoid exceptions everywhere (Go-style).


Because of interop necessity, mostly (as with F#, which has exceptions because of .NET and C#).


I stopped using Java a long time ago, and so I assume the language has gotten better since then, but early on at least it felt like Java almost took pride in making the developer jump through extra hoops. Compared to many other languages, using Java just made me feel tired.

Checked exceptions - a feature that seems to be a cost to the developer 100% of the time while being a benefit far less than 1% of the time - is a quintessential example of Java's tendency to cause developer fatigue.


Some people say the same about strong typing. Like, why do I have to write down the type of every single parameter or variable? Java is making me jump through hoops!

The point is, if you don't need the rigor of a strongly typed compiled language, there are other languages you can use. Perhaps a bash script is all you need.


You are very confused... "strong" in strong typing doesn't mean you have to write much, or anything at all. Actually, it doesn't mean anything precise. But, let's say, Haskell is probably at least as "strongly typed" as Java -- at least that's how most people understand that wording. And you don't have to write types in Haskell at all. The code will be a nightmare (as if Haskell can be anything else, but even by the very low Haskell standards it would still be a nightmare), but it will "work".

Until very recently, Java was just tedious, repetitive, high-entropy language, where you had to write everything multiple times.

Eg.:

    VeryLong<Type> variable = new VeryLong<Type>();
In this expression, it's obvious what the most likely type of the variable would be, but you still have to write it twice -- which is just a worthless use of your time, makes this repetitive mess harder to read, and makes typos easy.

Recently, Java tried to improve this situation by allowing you to omit types when the type inferred by default is the one you want. So you need to write less. You also need to learn the inference rules, obviously, so it increases the cognitive load and raises expectations of the programmer's skill a bit. But I think that's fair, since Java transformed from a language for the brain-dead into a language that requires more effort to master. While I'm not sure it's a positive change... this decision by Java's developers kind of flies in the face of your argument:

No, it's not necessary to repeatedly spell out what you want your program to do. It's just boring and contributes nothing to the program's correctness. If anything, it only causes the programmer to lose focus and introduces mechanical errors that would've been totally avoidable, had the language been more terse.


Type inferencing can get you in trouble quickly. Consider this code:

   var fireable = someMethod();
   fireable.fire();
The programmer intended this code to fire an employee. Let's say this code is in a military application, and another programmer modified the someMethod() function to return a missile. As long as the missile object has a fire() method this code will compile just fine... and do something the code didn't intend to do.

How likely are you to have employee firing and missile firing in the same program? Not very likely, but the principle is valid, regardless. You need to express your intention more clearly like this:

   Employee fireable = someMethod();
Now if someMethod() is modified to return a missile you get a compilation error, and the world will be a lot safer. You don't want missiles being fired by accident!


This is a problem of dispatch. Not so much of type inference... The problem with dispatch says that if a method gets overloaded too much and too often, then programmers tend to make mistakes when they assume how the method they call will work.

You would have to be insane for your program to accidentally confuse people with missiles, but I understand that this is a stretch to amplify the point. What tends to happen in practice is that e.g. the same method which used to only read something from local storage gets a new overload that goes out over the network, and brings with it a whole bunch of new problems the calling code wasn't prepared to handle. Or a string was passed to a method that used to do something simple, and then the method gets overloaded with a more sophisticated one that interprets substrings in the passed string as commands to run some code -- and then you get security problems.

Have you ever seen the IntelliJ interface when it deals with Rust code? I'm not sure if it does the same for other languages with type inference feature. Anyways, the idea here is that the programmer can toggle the display of inferred types to qualify every expression. So, it's really trivial to make sure you aren't calling fire() of a missile instead of an employee. And, in practice, this doesn't lead to problems. This is so because dispatch, essentially, blurs the difference between multiple different objects, while type inference is just a kind of abbreviation. It may cause confusion, but it will be of a different type: the reader might not know what's being abbreviated, whereas with dispatch, the reader may be convinced they know what the author meant, but still be wrong.


> the programmer can toggle the display of inferred types to qualify every expression

That's a terrible solution. Eyeballing is not as reliable as compiler doing a check. The language should provide a way for the programmer to express intent, and the compiler should do the check.

You can express intent more clearly when you say:

    Employee foo = someMethod();
as opposed to:

    var foo = someMethod();
A real-life example: I once helped a kid with her Python program to sort numbers. I looked at the code and the algorithm seemed correctly implemented, but the output was incorrect. The bug turned out to be that she hadn't called int() to convert the user input to integer, so all of the data passed through the entire program as strings, and got sorted as strings. There was nowhere in the code where the programmer could express intent that this is supposed to be integers. This sort of thing would never have happened in Java. Well, unless you misuse 'var'.


This is kind of a ridiculous thing to worry or think about. I’m seeing a lot of type inference in modern Java, and it’s the default way of writing Kotlin.


It is dangerous to call a method on an object whose type is not being checked by the compiler. Whether that's a ridiculous thing to worry about depends on how important your program is. If you work for NASA and your code will run inside the Mars rover, it is not ridiculous to worry about such things, but if you're writing some code that will be used once and thrown away then yeah, it might be ridiculous.


It is being checked by the compiler though. For your example to work both would need to extend a common object (unless the only thing you ever do to that object is call fire(), which is nonsensical) or you would get a compile time error right off the bat.

It's a ridiculous thing to worry about in either of your cases. That incredibly specific situation simply isn't ever going to happen in practice because so many other conditions need to be met for the compiler to let you proceed.


> For your example to work both would need to extend a common object

Not necessarily at all.

> unless the only thing you ever do to that object is call fire(), which is nonsensical

In this example, when the object is returned by a particular method, only fire() is being called, but other ways of obtaining the object can have more methods being called.

> It's a ridiculous thing to worry about in either of your cases.

Not at all... if your program is running inside a NASA rover, or if your program is running inside a robotic surgery machine you have to worry about safety and you want to maximize compile-time checks.


someMethod() could return an implementation of Employee that overloads the fire method to dispatch missiles.

The point being, someMethod is doing some job that you want it to do. It is incredibly unlikely to simultaneously start doing a completely different job, while also returning something of the same shape.


> an implementation of Employee that overloads the fire method to dispatch missiles

If you intentionally break it, that's on you.

> It is incredibly unlikely to simultaneously start doing a completely different job, while also returning something of the same shape.

But it doesn't need to return something of the same shape! The shape is not being checked. All the compiler checks is for the existence of a single method of the same name. You call that type checking?


> > an implementation of Employee that overloads the fire method to dispatch missiles

> If you intentionally break it, that's on you.

The shape is not being checked. All the compiler checks is for the existence of a single class of the same name. You call that type checking?


F# is strongly typed as well, and I almost never have to write type annotations. Hindley-Milner type inference is pretty useful. And my IDE of choice also shows the inferred types for easy checking. Although, without that, it'd be a PITA, as you said about Haskell.


> Until very recently, Java was just tedious, repetitive, high-entropy language, where you had to write everything multiple times.

More than half a decade ago.


Which means today's code.


In strong typing you don't have to (See type inference). It just means that the shape of the data/object you receive is known to you.

That a lot of languages force you to write it out is not an inherent property of strong typing.


If checked exceptions were implemented in such a way that they were inferred, and they weren't erased in the byte code, much like how people use strong types now, they wouldn't be seen as hoops. Meaning, if you didn't explicitly handle a checked exception, it would automatically bubble up yet be visible to your IDE.

I would love it if, at design time, all the possible exceptions that could happen were able to be inferred. But the way it happens in Java, it's entirely manual, and it won't catch things where I'm referencing compiled code. The design of checked exceptions in Java led to all kinds of exception anti-patterns becoming common, just to shut up javac.


Exactly. I have no problem with a function declaring the sort of exceptions it can handle, although simply handling them when they occur is normally good enough.


Java is the only mainstream language with checked exceptions and checked exceptions are nowhere near usefulness of static typing.

Following your logic, Java developers can now say “if you don’t like freedom of Java, use Rust/Haskell/Scala to validate everything at compile time”


Checked exceptions are part and parcel of a type system, though. If you have a foo function that returns a value or error you've got different options. Pseudo code they are:

foo():boolean throws Some, List, Of, Errors

foo():boolean | Some | List | Of | Errors

foo():(boolean | nil) , (nil | Some | List | Of | Errors)

The first is checked exceptions. The second is returning different types. Typically there's some sort of Option wrapper for ergonomics. The third returns a result, err tuple and by checking if err is nil you can see if foo succeeded.

Ultimately they are all the same. What differs is the boilerplate/syntax to accomplish what you want. If what you want is a generic error to handle unknown events then you gotta write it that way no matter which system you use.


You could say Rust error handling is more like checked than unchecked exceptions.


It is, that’s the whole point.


Java is not the only language that has checked exceptions. C++ has them too, for example, but thank god virtually nobody uses them in C++.


Checked exceptions (i.e. "dynamic exception specification") were deprecated in C++11 and removed in C++17.


C++ never had checked exceptions. It had dynamic exception specifications but they were a very different (and useless) thing as they were runtime checked not statically.

I wish C++ had checked exceptions.


> The point is, if you don't need the rigor of a strongly typed compiled language

This has nothing to do with strongly typed compiled languages vs alternatives.


Perhaps a bash script is all you need.

Brutal, even for this site


Java almost took pride in making the developer jump through extra hoops

That's exactly it. A holier-than-thou stance, reprised by most comments in this post.

Checked exceptions were a PITA.

Someone has recommended to use another language if you didn't like it. Unfortunately my employer at that time had entered a partnership with Sun that prevented that :)

Another ideological prohibition: no pointers. Well, how do you assign event handlers then? People were extremely confused with anonymous classes. I got the reason instantly.


> Checked exceptions - a feature that seems to be a cost to the developer 100% of the time

That's because exceptions are overused for not-truly-exceptional conditions. Languages that throw exceptions when a dictionary does not have a key you're looking for are doing it wrong.


> The list of recoverable exceptions that can be thrown by a method should be part of the contract.

If it’s really recoverable, then should it actually be an exception? How often do you see exceptions that are recoverable?

> If not, then to avoid crashing you would have to catch the root Exception class, which everyone agrees is a bad idea.

I disagree that it’s a bad idea, having a catch-all exception handler to avoid crashing the entire app is a good idea.

Java also has RuntimeException, and with it, a whole host of exceptions that may occur but are not declared. Should every function that uses the division operator be marked with `throws ArithmeticException` since it may end up being a division by zero? That would quickly become absurd.


If it’s really recoverable, then should it actually be an exception?

That's the problem with exceptions in general. They're a hammer that makes everything look like a nail. People start using them for all kinds of control flow situations because they're more convenient than having to deal with a lack of type system support for optional values, etc.

You get to the point where you have a parser that is expecting a digit and it encounters an alphabetic character so it throws an exception!


There are remarkably few situations involving recoverable errors, though, and they almost all look like "not found"--which includes your parse integer example--and, even then, most of the time you will want a forceful automatic exception variant and not a "recoverable" error value as there is almost never anything to do instead: try-parse is so rare of a thing that is correct to do it warrants having a quick "try" attached to the expression.

In contrast, there are a billion ways in which almost every line of code can fail--even stuff people are super sure could never fail is still usually at least subject to stack exhaustion--and almost none of these failures are things you should locally "handle"; and yet in many--even most--cases these are situations you can still recover from at a higher level if you don't screw up your own logic (which is not the point of these errors).


A parser failing is something I'd expect to be recoverable, and I'd expect it to return where it failed and why. You can do that without throwing exceptions at all, and instead encode it in the type, like returning a result<parsedObject,failureReason>. (This can be further refined, like adding the already-parsed/yet-to-be-parsed string to the failureReason as a value.) I WANT to be forced to handle it right there and then, rather than potentially forgetting a try/catch, or checking a returned boolean for success, or checking that an out parameter is not null (boolean tryparse(out parsedObject) is something I see commonly).

For me it is much more ergonomic to use a

  match tryparse somestring with
  | Ok obj -> proceedFurther obj
  | Error reasons -> handleerror reasons
For truly exceptional cases you don't expect to be recoverable? Yeah, throw an exception.


It may not be recoverable locally, but the caller may know how.

Consider Files.createFile. It throws if the target already exists. That method has no way of knowing what the recovery is, however, the caller might.

It could be that the caller wants to explode, but maybe its part of some kind of singleton launch where the recovery is to pass some piece of data off to whoever created the file originally.

Maybe this is a bad example since there’s not much interesting in the return of Files. But it’s exceptional in that it’s an abort rather than a success.

That said, checked exceptions are still trash because they don't work across thread boundaries or with futures. ExecutionException is the catch-all baked into the design because they needed something. The idea could have been good, though, because making error handling visible is useful. Java just has the misfortune of designing early, before the kinks had been worked out, and now we get nice stuff like Rust has.


Yes, in Java’s parlance an Exception is a recoverable code path, Error is the kind that is not.

E.g. a network call failing due to some IOException can be easily retried, that’s a proper error handling.


How are you supposed to recover from IllegalArgumentException or NullPointerException?


Recoverable, as a technical term. You can catch an NPE and follow it with any kind of code you wish, it will be correctly “handled” by the VM. (The semantics of course depend on the exact specifics, but returning a server 500 error and not failing the process for example in a web server is completely valid).

You can try catching an OutOfMemory Error, but it is not guaranteed that your handling code can successfully run. Hence, non-recoverable.


Catching the root exception is not a bad idea. If you had caught the new exception what would you have done with it?

Most of the time, it’s either catch the exception, log it in a central logging system keep moving and have a central alerting system or catch the exception log it and crash the program.


> Catching the root exception is not a bad idea. If you had caught the new exception what would you have done with it?

Something relevant to the error condition, probably?

There may be some cases where the total set of possible errors is too large to meaningfully handle every one of them specifically (for instance if you call a high level GPU initialization routine that can fail in a myriad of ways) but that's not true in all cases. Maybe the new error was in fact recoverable, or maybe it calls for some more specific diagnostics.

At any rate I completely agree with the parent that changing the set of throwable exceptions should be considered a breaking API change and should be enforced by the compiler. If an API method wants to future-proof and be able to throw anything, it can declare just that and everybody will know to expect the unexpected.

That's why I vastly prefer Rust's Result<> system which does most of what exceptions do and with fairly similar ergonomics but with normal return values and all the type checking that it involves.


> Something relevant to the error condition, probably?

And in every language I know you can have catch blocks for specific errors and then have a generic catch all.


The ability to catch exceptions is not the issue, it's knowing what to do once they're caught that's the problem.

The parent apparently had exhaustive exception handling, catching all cases. A new exception is now generated, probably signaling a new error condition, you can't expect the calling code to be able to handle it gracefully.

Hence why a compile error might be a better solution here, the coder could decide whether the new exception can fit an existing handler or requires special handling.


Realistically speaking, you should essentially never try to "handle" errors with tricky program logic: propagate them up (automatic in languages with exceptions) and eventually--in as few places as possible--report them to the user so the user can decide what to do, not the code.


I disagree: low level exceptions rarely make sense on their own for the users, and the automatic propagation of exceptions just promotes laziness by having a catchall "print(exception); exit(1);" at the top level.

If you're lucky you get a proper stack trace that lets you figure out what's going on, but even that is poor ergonomics.

A high level library should report high level exceptions, for instance, not minute details of the lower layers. Nothing is more frustrating than having a program fail when all you have is some cryptic "No such file or directory" or "Operation not permitted" error that doesn't tell you what the code was actually trying to do or give you any hint on how to fix it.
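A sketch of that principle in Java (all class and method names here are hypothetical): wrap the low-level error with high-level context while keeping the original as the cause for diagnostics.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class Demo {
    // Hypothetical high-level exception: says what the library was doing.
    static class ConfigLoadException extends Exception {
        ConfigLoadException(String msg, Throwable cause) { super(msg, cause); }
    }

    static String loadConfig(Path path) throws ConfigLoadException {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            // A bare "No such file or directory" is cryptic; wrap it with
            // context, keeping the original as the cause for stack traces.
            throw new ConfigLoadException("could not load configuration from " + path, e);
        }
    }

    public static void main(String[] args) {
        try {
            loadConfig(Path.of("/nonexistent/app.conf"));
        } catch (ConfigLoadException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The user now sees what the program was trying to do, and the cause chain still carries the low-level details for anyone reading the log.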


It is a bad idea according to Microsoft.

See https://learn.microsoft.com/en-us/dotnet/fundamentals/code-a...


This is for a library. Whatever the “root” of your code is should have a generic catch all where you do “something”, you don’t want library code (either yours or the system) to decide for the user what should be done with exceptions.

As a library writer, you should do something about the exceptions you can handle and rethrow the ones you can't.


> Whatever the “root” of your code is should have a generic catch all where you do “something”

If your only goal is to avoid crashing the app then you can catch the base Exception class at the root of your code and go home. But consider the scenario where you're writing a very important method, where the functionality of your method is critical. If your method does not do its job, the rocket may crash or the patient may die. In this case you want to know what recoverable errors are possible in the functions that your method calls, correct? Wouldn't it be nice to get a guaranteed-by-compiler list of the possible, recoverable errors?


And then you have a list of catch blocks with the final one being the base.


No. You only catch exceptions you can recover from in your method. Other exceptions should be handled in higher layers.


I’m talking about the “root” of your app


Catching at the root is only half of the story. You must also have the ability to resume where it was thrown.


> Someone made a change in a function I was calling, and it started throwing a new exception. This would have caused a compile error in Java, not a crash.

I call BS here. I've worked in a few Java projects, and in every single one, the people changing the method in question would have thrown RuntimeException to stop the compile errors.

If RuntimeException was checked, you might have a point, but given that there's a class of unchecked exceptions, you're relying on discipline instead of the compiler anyways.


You need to help train your coworkers to write better code :) Point this thread out to them.

Seriously, Java doesn't force bad programmers to write good code. It just enables good programmers to write good code.


That is not better code. Couple of try-catch-catch-catch-...-catch cases in a function that would otherwise be 2-3 lines long makes for an awful code.

Also, imagine a situation when an iterator throws an exception. Or you wanted to write f(g()) but now you can't and have to do:

    T t;
    try {
        t = g();
    } catch (E1 e) {
        ...
    } catch (En e) {
        ...
    }
    f(t);
Now it takes a much greater effort for the reader to figure out what's going on. It's also tempting to allow some exception cases to succeed so, f(t) is reached even if there was an error. Because these two code pieces are now far apart, it's possible that later edits will introduce bugs because the programmer didn't see that either f(t) is reachable even if g() failed, or accidentally made it reachable when it shouldn't have been.

Bottom line, it makes human errors more likely.


This is only because you use exceptions incorrectly. You can (and should) write:

    try {
        final var fResult = f(g());
        // do something with fResult
    } catch (E1 e) {...}
    catch (En e) {...}
That's the main idea of exceptions in all languages: main flow is kept together and exceptional flows are separate.


I think there’s a strong assumption in this pattern that errors can and should be handled immediately and that there is a recovery path from the failure. If there is no recovery path and you’re just propagating the error, this pattern becomes an awfully verbose return statement.


When structured exception handling was becoming popular (C++ in the nineties?), the idea was that immediate error handling (like when you have to check return values for errors) makes the main flow (happy path) blurred and unreadable.

You cannot have both at the same time, I'm afraid.


I think the issue with this pattern is that either we don't care about what went wrong, we want to treat all errors the same way, and then having n catch clauses is overkill. Or we do care about what went wrong, and in that case the exception type is not enough to identify the issue, as often multiple lines in the try block will throw the same exception type.


Yes - that's one of the issues with structured exception handling. Relying on the exception type alone is not enough, as information is lost (i.e. the actual source of the exception).


Crazy idea: Maybe it would be helpful if we could label operations, and catch by either label or type or (label, type). Then we could centralize error handling without losing anything.


While tempting at first... just try to imagine how an "extract method" refactoring would work...


It would have to do some clever rewriting of the logic, or it could simply declare that extracting a method isn't available in that region. The simplest way to rewrite the logic may be the introduction of new exception type(s) corresponding to the labels.


They also have to throw RuntimeException if they want to use APIs that accept functions and thereby specify which checked exceptions those functions are allowed to throw, like Streams.


I will acknowledge that this wasn't the best code (or the best programmers).

I will also cede the point that you can write good code in Java.

But my point was that Java wouldn't have enforced the situation to be a compile error, not that you can't write good code in Java. I maintain this point, and I think it's true for the average java project.

If you're getting these benefits from Java, it's the intersection of Java and your team's or your organization's culture, not purely from Java. It's possible for you to gain this benefit and for checked exceptions to have still failed out in the wider world of the average Java project.


And a lack of checked exceptions means good programmers can't write good code? So good code may only be written in Java?

No... I don't think so.

Checked exceptions are, at best, just a 'hey, are you sure that's the right thing to do here?' and, at worst, an additional source of rigidity in your code that blows up the scale of a minor change.


> And a lack of checked exceptions means good programmers can't write good code?

Right, and that was my point at the top post in this thread.


Wait, you seriously hold that position!? As in, only Java permits good code?


Each language has its strengths and weaknesses. In the case of exceptions Java gets it right, C# doesn't. There are other aspects that C# does better than Java.


“I wrote some very good C# code“

We all do.. its always that other guy who is at fault :)


Exceptions are great when they don’t need to be caught. Otherwise they are a pain.


Each 'leaf' statement of language should have a definite and finite list of exceptions that it can throw.

Each non-leaf statement is a composition of leaf statements. Therefore every exception can be determined, and that's how checked exceptions would work: compute the set of exceptions and check they match the declared list.

Therefore even without checked exceptions the compiler could derive the exceptions for you, and you can check them or not as you please, so you get the best of both worlds. All without forcing people to give an explicit list of exceptions everywhere. Forcing checked exceptions is not going to be popular, and much of the time it is counter-productive.

Also, someone broke your Liskov Substitution Principle, which may be your bigger problem.


>you would have to catch the root Exception class, which everyone agrees is a bad idea.

It's a bad idea to catch an unexpected exception?


The issue brought up by the parent is that the exception needed not be unexpected. If the compiler had generated an error because a new exception type could be thrown by the third party API, the developer could have decided that they either couldn't do anything with it and just add a catchall, or actually implement code specifically to handle this issue.

IMO the entire concept of "unexpected exception" is bogus in and of itself. That's like calling "exit(1)" in your code when something goes wrong instead of using proper error handling facilities.

Admittedly it's very convenient for small scripts where proper error handling might be overkill, but for any serious application it's a massive footgun IMO.


You run the risk of swallowing something interesting. This was always the problem with C# exceptions to my mind: there was just one mechanism for reporting errors, which covered everything from the most non-erroneous of events, that aren't even errors, such as file not found or invalid file name encoding or failed to write data to file or socket error during write, to stuff that absolutely should cause the program to go pop and die immediately, such as a null pointer or divide by zero.

Anyway, I sympathize with this thread's OP, as I've had exactly the same problem with C#. In the end, I came to the conclusion you kind of do often have to catch every exception, because you can't trust the functions you call, and the documented exception lists for framework stuff are not always accurate. Safest is to put each call in its own try...catch block, and do any property accesses outside the try...catch block so you find out about the NullReferenceExceptions.

This is annoyingly verbose though.

It also does assume your setters and getters don't throw anything.

I mostly liked C#, but the exceptions aspect is not good. Similarly, I've generally despised working with Go, but the way it deals with errors is not its worst feature! (You do have to pay attention to the linter output though! It's the same err variable every time, so the compiler's fastidious checks for variable use are always foiled...)


> the most non-erroneous of events, that aren't even errors...

You're applying a value judgement that the language makers can't and shouldn't make for you. Whether the error is innocuous or malicious, the happy path of your code cannot proceed and needs explicit handling by the programmer.


No value judgement here, I don't think, at least not from me (though of course my values inform every opinion I have, including this one). I simply claim that every path is equivalent, and all must be considered. I claim there is no so-called "happy path" - a term I reject - nor do "errors" really exist.

There is one case where the socket send succeeded, and another where it failed. There is one case where the file name encoding is valid, and another where it is not. And so on. It makes no sense, in my view, for one case to be handled with one language mechanism and the other case to be handled with another.

By comparison, there is (or so I claim!) no valid path where a null pointer is dereferenced, or where a value is divided by zero. So I will accept the requirement for some other mechanism for dealing with these - though by the time the code is released to the wild, I would hope that all such occurrences would have been eliminated.


The correct behavior for not just the vast majority of these conditions but virtually all of them is the same: propagate the error. You aren't going to sit around and try to "recover" from failing to send a packet or encode a filename: you just report it to higher level code. There might be something you can do at a higher level--such as reconnecting or using a different server from a load balancing list--but at that point the exact reason is irrelevant (and yet should be maintained in case the higher level code needs to inform the user in either a dialog or a log file): failure is, virtually all of the time, boolean in nature and a non-local phenomenon.


My opinion was informed by writing the higher levels rather than the lower ones! Yes, the code absolutely is going to try to recover from sending a packet, because it was sending that in service of some larger goal that now needs unwinding partway through. And, yes, it is going to try to reconnect, or use a different server - or whatever.

The thing it really doesn't want to do is punt the issue on to some higher level. Because there isn't one.

But, equally, you don't want to be swallowing interesting problems that genuinely indicate actual bugs. Those, you do want to pass on (and due to the lack of any higher level, your process will be killed, and some mechanism will spring into action to produce a report).


Why not catch System.Exception also? Or just catch


When you catch the root Exception class in C# you end up catching IndexOutOfRangeException as well. You should let the program crash instead because this happened due to a bug in your program. Continuing as if nothing happened is unsafe.

Another issue is that you are not allowing higher layers see the exception, even though they may have logic for recovering from the exception. To see the exception they have to now fix your code by removing the catch-all.


Catch the root Exception class, show an error, but let the user continue working if possible. For a web app, there already is a catch-all exception handler at the request level that prevents the entire server from crashing. For other environments, having a catch-all handler at some top-level interaction point (i.e. on buttons or menu items) would be a good choice too.


The “if it’s possible” is a major part of the reason to not catch the root exception.

Catching typed exceptions give you much more easily parsable details about whether or not “it’s possible” to recover.

It’s bad practice to catch the root exception. More often than not, it points to an inexperienced developer


If the exception that was thrown (by a method you called) is in fact a condition your code can recover from, then the functionality of your code was needlessly aborted.


I meant at the higher layers; minus some potential cleanup, a lot of library code is exception neutral and shouldn't care.


"Nobody uses [checked exceptions]" in Java, this casually claims.

Oh dear. I have a slew of current counter examples on my machine, in GitHub, etc.

Either carelessly untrue or else annoying hyperbole.

Made the rest very hard for me to read.


> Can I specify that implementations of foo can throw no, some, or all exceptions? What would it even mean to write something like throws *? Analogously, if I have a function that takes a method as an argument, like a callback, how do I specify what set of exceptions in can throw? Can I have generic “exception set parameters”?

> Concretely, checked exceptions in Java failed because Java lacks “throwingness polymorphism”, if you will.

I don't understand. Every "missing feature" the author asks for is, in fact, present in Java. The example below just about sums it up.

  @FunctionalInterface
  interface MyCallback<X, Y, E extends Exception> {

    Y apply (X x) throws E;
    
  }
  
  class A {

    <X, Y, E extends Exception> Y runCallback (MyCallback<X, Y, E> func, X x) throws E {
      return func.apply(x);
    }
    
  }
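A quick usage sketch of the pattern above (self-contained; the output comments are what I'd expect under standard Java inference rules): the compiler infers E per call site, so a lambda that throws a checked exception forces the caller to handle it, while a non-throwing lambda infers RuntimeException and needs no try/catch.

```java
import java.io.IOException;

public class Demo {
    @FunctionalInterface
    interface MyCallback<X, Y, E extends Exception> {
        Y apply(X x) throws E;
    }

    static <X, Y, E extends Exception> Y runCallback(MyCallback<X, Y, E> func, X x) throws E {
        return func.apply(x);
    }

    public static void main(String[] args) throws IOException {
        // E is inferred as IOException here, so main must declare it.
        String upper = runCallback((String s) -> {
            if (s.isEmpty()) throw new IOException("empty input");
            return s.toUpperCase();
        }, "hello");
        System.out.println(upper); // HELLO

        // No checked exception thrown: E resolves to RuntimeException,
        // so no try/catch or throws clause is required for this call.
        int len = runCallback(String::length, "hello");
        System.out.println(len); // 5
    }
}
```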


The author's point about the orthogonality of exceptions to values and types still stands; try creating a class like

    abstract class GenericException<R> extends Exception {
        abstract R reason();
    }
As it is, Exception is a "special" type that isn't allowed to be used like normal Java types, because Java generics are a leaky abstraction introduced after checked exceptions, and the JVM cannot deal with type erasure in catch blocks.


Little aside, but I feel a lot of the drama surrounding exceptions could have been solved with a little syntactic sugar making their handling easier. Something along the lines of Perl's "|| die("...")" pattern would be a start (i.e. add some context and rethrow).

In C++ I find it quite infuriating that try{} opens up a new lexical scope, which means you can't construct something, check for errors and move on, since the act of adding a try{} means you'd call the destructor of the thing you just constructed. Meanwhile surrounding a whole block of code with a try{} means you no longer can tell where the exception came from.


> In C++ I find it quite infuriating that try{} opens up a new lexical scope, which means you can't construct something, check for errors and move on

You technically can with a small workaround (demonstrated below), though I personally wouldn't use this approach.

    void foo() {
      auto value = [] {
        try {
          return Foo();
        } catch (...) {
          // Every path must still yield a Foo (return a fallback or
          // rethrow); falling off the end of a value-returning lambda
          // is undefined behavior.
          return Foo();
        }
      }();
      value.do_something();
    }


Yep, I had to resort to that a few times. With a bit of syntactic sugar that could be turned into something like:

    auto value = try Foo("will fail") 
    catch(...) {
        return Foo("will work"); // or throw something else
    };


> In C++ I find it quite infuriating that try{} opens up a new lexical scope

That’s an interesting point. I’ve never considered that because I haven’t worked with classes that regularly throw in their constructors.

What situation do you have where 1) you are constructing classes that regularly throw in their constructors and 2) this is a recoverable error that 3) should be handled at the point where the class is constructed instead of some outer point?


Code like:

  QImage image("filename.png");
is pretty common. Qt solves this by not throwing an exception and using a Null image check:

  bool QImage::isNull() const
but that throws away all the extra information that an Exception could provide.


> infuriating that try{} opens up a new lexical scope

This slays me.

The try, catch, and finally blocks should be just one lexical scope.


I don't think that helps?

If, as is kind of the point of RAII, your constructors do some initialization and that initialization may fail, then:

If you have a single object that may fail to construct then maintaining it in scope for the catch block doesn't make sense; it's not initialized.

If you have several, then you still can't maintain them in scope, because you don't know which ones are actually valid.

Yes there are other patterns that wouldn't have this issue (acquiring resources outside of the constructor), but then you can just declare your objects outside of the try-catch block.


I misspoke.

Ignoring the syntax for a moment, I want scoping to behave like this pseudo-code. So that the catch and finally blocks are lexically nested within the parent try block.

  try {

    OutputStream out = ...

    ...

  // Must be at end of try block
  catch ExceptionA, Exception B { ... }

  catch Exception C { ... }

  finally { ... }
    

  } // end of try

Maybe even allow catch and finally to allow single expression in addition to blocks. Just like with if/then/else.


Right, but if constructing OutputStream throws ExceptionA, you shouldn't have it in scope; it's not something you can use.


Agreed. I'll make that more clear next time. I omitted catch's parens, which makes it less clear.

I just want 'out' visible to both catch and finally.


If constructing "out" fails but one of the catch-blocks tries to use "out", what should happen?


Python is an example of a language that works like this. When the constructor throws, 'out' remains not defined, but you can just do 'out = ...' in the catch block and define one with a fallback value. So all the code after the error handing would just see a working 'out' variable. This works due to all variables being bound to the function scope, not to the code block, in Python.


Python can do that because:

* it's not an error to have an unbound name in scope; just referencing it at runtime is an error.

* there is a reasonable default; anything can be set to None

This doesn't hold in C++. If your class throws during its initialization, you can't really do much (even default initialization of members may not work). Reassignment may not be possible if it was declared const.

And there's still the issue of 'which objects successfully initialized'?


In Python, sure, but I struggle to see a way to make it make sense in C++.


As others have mentioned, an interface is a contract and any possible exception that may be thrown needs to be explicitly defined; that's why checked exceptions exist.

However, there is an exception (pun intended) to this: inline lambdas. You can use lambdas as variables, pass them as parameters, etc. And it makes sense for them to retain the interface if needed, but (at least in Java) when you are doing a .map(v -> parseInt(v)), checked exceptions can't escape the lambda, and that's probably the worst problem of Java lambdas.

You can avoid this with custom interfaces where the exception is a generic parameter, but sadly the native ones aren't defined like this (yes, the exceptions that a function throws can also be declared with generics, and the compiler will automatically infer the most restrictive exception type from the lambda body!)
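One common workaround is exactly such a custom interface plus a wrapper that converts the checked exception into an unchecked one, so the lambda fits the stream API's Function. This is a sketch; ThrowingFunction, unchecked, and parseStrict are hypothetical names.

```java
import java.text.ParseException;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class Demo {
    @FunctionalInterface
    interface ThrowingFunction<T, R, E extends Exception> {
        R apply(T t) throws E;
    }

    // Hypothetical helper: catches the checked exception and rethrows it
    // wrapped in an unchecked one, so the result fits java.util.function.Function.
    static <T, R, E extends Exception> Function<T, R> unchecked(ThrowingFunction<T, R, E> f) {
        return t -> {
            try {
                return f.apply(t);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        };
    }

    // A stand-in for any method with a checked exception in its signature.
    static int parseStrict(String s) throws ParseException {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            throw new ParseException(s, 0);
        }
    }

    public static void main(String[] args) {
        // .map(Demo::parseStrict) would not compile; the wrapper makes it fit.
        List<Integer> parsed = List.of("1", "2", "3").stream()
                .map(unchecked(Demo::parseStrict))
                .collect(Collectors.toList());
        System.out.println(parsed); // [1, 2, 3]
    }
}
```

The cost, of course, is that the checked exception is laundered into a RuntimeException and the compiler stops tracking it.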


Yeah. The thing is, I like the idea of checked exceptions. You could go ahead and define a number of error situations or error kinds and annotate methods with these checked exceptions to force applications to handle these errors. Like, you have a Database.query function and it might throw different exceptions - ConnectionInterrupted, QueryPreparationFailed, QueryFailed, TransactionCancelled. And then you could catch specific error kinds and react differently on those - retry for interrupted connections, cancel on query errors.
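A minimal sketch of that Database.query idea (all exception and method names here are hypothetical, and the failure is simulated):

```java
public class Demo {
    // Hypothetical hierarchy of recoverable query errors.
    static class QueryFailed extends Exception {
        QueryFailed(String msg) { super(msg); }
    }
    static class ConnectionInterrupted extends QueryFailed {
        ConnectionInterrupted(String msg) { super(msg); }
    }
    static class QueryPreparationFailed extends QueryFailed {
        QueryPreparationFailed(String msg) { super(msg); }
    }

    // The throws clause tells callers exactly which recoverable
    // error kinds they are expected to handle.
    static String query(String sql) throws ConnectionInterrupted, QueryPreparationFailed {
        if (sql.isEmpty()) throw new QueryPreparationFailed("empty query");
        return "ok";
    }

    public static void main(String[] args) {
        try {
            System.out.println(query(""));
        } catch (ConnectionInterrupted e) {
            System.out.println("retrying: " + e.getMessage());
        } catch (QueryPreparationFailed e) {
            System.out.println("cancelling: " + e.getMessage());
        }
    }
}
```

Callers that don't care about the distinction can still catch the common QueryFailed supertype.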

This however runs into issues. One really annoying one: Checked exceptions are part of your API. If you forget an error condition and want to introduce a new exception for that, well guess what, that's a breaking API change. There is no easy way to evolve an API with checked exceptions, because changing them requires new major versions, because it prevents code from compiling.

On top, if you want to encapsulate your API properly, you end up with a lot of boilerplate. Once you're using a library, and that library has checked exceptions, you either have to base your own public API on the API of that library - meaning you can never replace it - or you have to start wrapping all manner of errors into your own checked exceptions. I've kinda done it as an experiment some time ago, but that just ends up with so many exception types and so much error wrapping it's kinda ridiculous.

And then you end up with the sad truth on top to be honest: Most error checking is rather brute and clumsy. In most cases, I just let exceptions bubble up because my intermediate function can't really do anything about it. As a distant second, you catch all errors, shove them in some kind of error reporting, reset the system and continue trucking. As a somewhat similar third, I dissect errors in CLI tools to create some useful error messages. And only them I might start caring about some specific errors, but that's pretty rare if you're just running some REST-based business logic.

All in all, it's a good idea, but the implementation results in a lot of API churn or boilerplate for something that's not used much in general.


> This however runs into issues. One really annoying one: Checked exceptions are part of your API. If you forget an error condition and want to introduce a new exception for that, well guess what, that's a breaking API change. There is no easy way to evolve an API with checked exceptions, because changing them requires new major versions, because it prevents code from compiling.

This sounds like a good thing. Throwing a new exception is a breaking change whether it's enforced by the compiler or not. Otherwise, callers of your API go from being exception safe to not being exception safe. It's strange to me that you would want to squirm around that just to avoid bumping the version number.


That's not true in Java.

You can gradually increase the granularity of checked exceptions using subtypes. For example I might initially have AnythingFailedDuringTheQueryException with a subtype of QueryPreparationException. If I introduce new subclasses of QueryPreparationException to increase the precision of the error reporting, all existing catches for QueryPreparationException will still function as they did before. A "NotEnoughParametersException" subclass of QueryPreparationException is still caught as a QueryPreparationException. Only if you introduce catches for the new, more granular exception does your code's behavior change to use the new behavior.

That's a very clean way to improve error reporting in a backwards compatible way.
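A sketch of that refinement (hypothetical names): the method keeps declaring the original checked type, and the new subtype slots in without breaking existing catch blocks.

```java
public class Demo {
    // Originally declared checked exception.
    static class QueryPreparationException extends Exception {}

    // Added later: a more granular subtype. Existing
    // catch (QueryPreparationException e) blocks still match it,
    // so the method's throws clause never has to change.
    static class NotEnoughParametersException extends QueryPreparationException {}

    static void prepare() throws QueryPreparationException {
        throw new NotEnoughParametersException();
    }

    public static void main(String[] args) {
        try {
            prepare();
        } catch (QueryPreparationException e) {
            // Code written before the subtype existed still compiles and runs.
            System.out.println("caught " + e.getClass().getSimpleName());
        }
    }
}
```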


Checked exceptions failed in Java because they don't play well with parametric polymorphism. Union types might help (exception list declaration is actually a union type) but I don't think it would succeed anyway as it is not general enough.

Handling effects and effect polymorphism in programming languages is an active area of research and there are some new languages that try to approach the problem (e.g. Koka).

Haskell has several effect libraries (effectful, cleff, eff, polysemy) that look quite nice.

Idris with its dependent types allows precise definition of effects in function signatures.


No, that's not it. The same problem is true for other error-handling types, such as the "either" or "result" types. Languages without union types or a similar mechanism are unable to define the error types ad hoc and force the developer to define them in advance, which is very unergonomic.

But that is true in both cases. The problems with checked exceptions come on top, namely that you cannot use all the regular value-machinery to transform and manipulate the result of a function-call.


> No, that's not it. The same problem is true for other error-handling types, such as the "either" or "result" types. Languages without union types or a similar mechanism are unable to define the error types adhoc and force the developer to define them in advance, which is very unergonomic

We are saying the same thing, aren't we?

But it is not only about exceptions. Other effects have the same problem and you need a mechanism to define effect polymorphic functions.


Maybe we do and I can't read. :-)


> Can I specify that implementations of foo can throw no, some, or all exceptions?

Just because there's the possibility of a throw in the signature does not mean it has to be done in a concrete implementation, and if implementations are really to be exchangeable, it has to be handled in case the implementation changes.

> What would it even mean to write something like throws *?

Roughly similar to "throws Exception" or "throws Throwable" - that it may throw whatever you have under the sun?

> Concretely, checked exceptions in Java failed because Java lacks “throwingness polymorphism”, if you will.

it does allow for extending exceptions, so an interface can define some exceptions that are basic, and implementations can then choose to extend those to provide MORE detail if needed.

> Was Java wrong to add checked exceptions? No. They took a risk and it didn’t pay off.

Uhm.... I'm gonna go ahead and use some other opinion on this


For checked exceptions, as you mention, "throws *" is throws Exception; people can then refine it in the implementation.

As with many language features, checked exceptions are often misused and/or misunderstood. Too many times I have seen people blindly propagating checked exceptions upward, ending up with a controller method exposing a SQLException. Then, for sure, it is better to use unchecked exceptions overall.


I am surprised an even more obvious interface definition of

  public interface ExampleInterface {
      void foo() throws IOException, AnotherException;
  }
was not covered: is this also not supported?

That would solve a lot of use cases if inheritance is respected.


Wouldn't it still be a problem? By putting "throws [Exception List]" in the interface, the interface is making assumptions about the implementation details of every potential implementation.

The interface can't know every exception that an implementation might throw, and using your example, some implementations may not do any IO at all and won't throw those exceptions.

It seems like you'd end up needing to list every possible exception on "foo()" when defining the interface, and then handle every possible exception in any code that uses "ExampleInterface".

At that point, it would be better to not annotate exception information at all.


> the interface making assumptions about the implementation details of every potential implementation.

No, as you'd force the implementation to catch their implementation specific exceptions and rethrow them in the way defined by the interface.

That's really the crux of it: when I call foo(), I expect to see a FooException when something goes wrong, since that's the only one I can meaningfully react to. If I get an ImplementationDetailException that I've never heard of and that isn't specified in the interface, how am I supposed to react to that in a meaningful way?

If exception are supposed to be used for error handling, you have to actually report and handle them in a well specified manner, you can't just treat them as a slightly less crashy version of a SIGSEGV.


Unless the language does that for you (both enforcing it, and providing tools to make it absolutely trivial), it will never happen, it's just way too much overhead to have to define a new exception for every method in an interface, then a sub-exception for every method in the implementation of that interface, then go through the internal conversions from the underlying exceptions to the parent one.

People can't be arsed to do that in Rust where there's ways to abstract over those things and macros to define them.

If that's the route you assert is necessary, you need something like Zig's errors.


What you are saying is that people can't be arsed to do proper error checking, which is absolutely true.

Making a sane hierarchy of exceptions is the smaller of the problems of actually handling them everywhere.


I agree that, if you are working with checked exceptions, that’s certainly better than adding garbage to your interface.

But for many types of checked exception you just end up wrapping the underlying exception in a foo exception anyway.

I think you are right: if you are using exceptions for error handling, of errors you expect people to actually deal with, this is currently a reasonable (if painful) option to ensure exhaustive analysis of returns — which most people agree is a good thing these days I think, what with mainstream popularization of options etc.


Exceptions should absolutely be a part of an interface and it stops you from leaking implementation details. If I have an IRepository interface then none of the methods should ever throw SqlException, HttpException, or anything like that. You'd map them to something else. Exceptions in most languages are implicitly part of the contract, and not having consistent ones on an interface breaks Liskov's Substitution Principle.


As others have pointed out, my example would have been better if it said `ExampleExceptionFoo, ExampleExceptionBar` to more clearly indicate that an interface would define what exceptions it could throw as part of the contract. Individual implementations could extend those base exceptions if they need to.

There is not much difference between this and using error types, except the syntax sugar.


It is supported, but does not actually solve anything, you're still limited to whatever checked exceptions the creator of the interface has decided might be valid (it's just that in the original it's "none").

Let's say that the creator of the interface allows IOException, but the backend of my implementation is a database so I have SQLExceptions, same issue.

Is the creator of the interface supposed to add every checked exception in the standard library? Ignoring that this still isn't all of them, then the caller is hosed because they have to handle (or rethrow) every single one of them which is no better. So they probably go "fuck it" and just handle / rethrow Exception. At which point you can just do that on the interface, and it's not helping anyone, and you're better off just not saying anything.

Checked exceptions simply don't mesh with the language: because the language provides limited to no way to abstract over them they're an issue every time you're trying to be generic over anything, the only situation in which they kinda sorta work is if the entire callchain is concrete, which not only is very limiting but it's very much not idiomatic.


> Let's say that the creator of the interface allows IOException, but the backend of my implementation is a database so I have SQLExceptions, same issue.

The creator of the interface should decide on a generic exception type:

  public interface UniversalStorageInterface {
      void store() throws StorageException;
  }
Then, in the present, the FileStorage can define (and throw) a FileStorageException (inherits from StorageException), and the SqlStorage may define (and throw) a DatabaseStorageException (same).

In the future, where we might want to store everything on a (not-yet-existing) Cerulean backend, we would then define (and throw) a CeruleanStorageException (again, a subclass of StorageException), and our basic interface would not need to change. We would also have no need to recompile FileStorage or SqlStorage (or the proprietary SteelBlue storage where we have no code).
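Sketched out, that looks something like this (only UniversalStorageInterface is from the comment above; the class bodies are assumptions):

```java
// Base type declared on the interface.
class StorageException extends Exception {
    StorageException(String message, Throwable cause) { super(message, cause); }
}

// Backend-specific subclass; callers may catch it, but don't have to.
class FileStorageException extends StorageException {
    FileStorageException(String message, Throwable cause) { super(message, cause); }
}

interface UniversalStorageInterface {
    void store() throws StorageException;
}

class FileStorage implements UniversalStorageInterface {
    @Override
    public void store() throws StorageException { // declared type stays the base
        try {
            writeToDisk();
        } catch (java.io.IOException e) {
            throw new FileStorageException("disk write failed", e);
        }
    }

    private void writeToDisk() throws java.io.IOException {
        // pretend to write something to disk
    }
}
```

A caller that only knows UniversalStorageInterface catches StorageException and works unchanged against every present and future backend.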


So now whoever creates the interface needs to define an exception type per method, and whoever implements the interface needs to define a subclass of that per method and catch-and-wrap every exception their callee raises.

You better add some serious tooling built into the language to facilitate this, because from experience ain't no way anyone's going to bother with this if they have to handroll it, even with IDE codegen assisting.

And it still doesn't solve the issue of generic interfaces like streams.


It doesn't have to be a type per method, and whether someone is "going to bother with this" depends on the application — I guess in many cases it's perfectly fine to write happy-path code with a single catch where you present a sad-face emoji and a "try again later" line; in other types of applications this just won't fly.


Whether you do it with exceptions hierarchy or error types or something else, it's pretty much the same thing: you either handle all the errors or not (or anything in between).

How do you solve that error complexity with any other approach and how is that different using any other syntax sugar?

The only thing I dislike about exceptions is that they are ignored by default, allowing people to all too easily write code with no error checking.


Note that StorageException in the previous example could also simply be a BaseStorageException that all the methods throw, even if one of them is DiskFullError(BaseStorageException) or ObjectStorageLimitExceededError(BaseStorageException), and another is InvalidFilenameError(BaseStorageException).
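On the caller side, that hierarchy means you handle the specific cases you care about and let the base catch cover everything else, including subclasses added later (a sketch; the exception names are from the comment, the bodies and StoreDemo are assumptions):

```java
class BaseStorageException extends Exception {
    BaseStorageException(String message) { super(message); }
}
class DiskFullError extends BaseStorageException {
    DiskFullError() { super("disk full"); }
}
class InvalidFilenameError extends BaseStorageException {
    InvalidFilenameError() { super("invalid filename"); }
}

class StoreDemo {
    // Hypothetical operation that fails one of two ways.
    static void store(boolean diskFull) throws BaseStorageException {
        if (diskFull) throw new DiskFullError();
        throw new InvalidFilenameError();
    }

    static String attempt(boolean diskFull) {
        try {
            store(diskFull);
            return "ok";
        } catch (DiskFullError e) {
            return "retry after freeing space";          // specific, recoverable case
        } catch (BaseStorageException e) {
            return "unrecoverable: " + e.getMessage();   // base catch covers the rest,
        }                                                // including future subclasses
    }
}
```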


Maybe your SQLException should be the caller's IOException? I don't code Java, but IMO callers should be hit with errors at the right abstraction level. If the caller is using that interface and not your low-level backend functions directly, then the caller should expect IOException, not SQLException.

If the interface does not provide exception types for all cases, you are stuck rethrowing your error as the wrong one; I'd say that means the interface is bad, not that checked errors are bad...


“bad” might not be the right word. Maybe better “usefulness for current population of developers is very limited”.


This is indeed the supported way to do it.


Thanks for confirming; as I am not familiar with Java, it is good to know this makes sense. As such, I am not sure what the original author was complaining about, because that seems like perfectly sufficient language support.


The sad truth is: it's pointless to talk about that with those who are not used to structures like Either/Result/IO monads. And those who are already know why and don't need such articles.


Java has a lot of "throwingness polymorphism". You can extend an interface to say a method throws a subclass of the Exception thrown by the parent interface method. You can extend an interface and say that that method actually doesn't throw an exception at all.

  interface NonThrowingAutoCloseable extends AutoCloseable {
    void close();
  }
You can also declare that you throw a generic exception type

  interface ThrowingSupplier<T, E extends Exception> {
    T get() throws E;
  }


Erlang does this the right way: the process (the Erlang sub-process) fails, but the supervisor tree anticipates such failures and will restart the process that failed, which, if the rest of the application is properly designed, will handle the error without loss of consistency. Depending on the failure it can also escalate further up to deal with larger and larger levels of malfunction. Properly implemented, short of gross hardware failure (all nodes down), this will result in a system that is always at or near the maximum availability permitted by the hardware.


I think Swift’s error model is my favorite (though Go’s new error stuff seems similar). Unrecoverable errors are unrecoverable (they’re aborts), and other errors are marked in the function’s signature and must be handled explicitly. This means the error effect is explicit along the call stack until somebody handles it.


They didn't fail. I find them useful and great in code reviews where I can see juniors trying to get away with bad practices because they don't feel like making the possible issues known to business owners.


I think the reason is simpler: Unchecked exceptions.

Investing in (checked) exception safety is not as valuable when anything can throw an NPE.


Checked exceptions failed because developers are lazy and don't want to check for errors.

Checked exceptions force you to do more work, but it's essential work to make your code more robust.

Replacing them with runtime exceptions gives you the illusion of clearer code, but all you have is actually code that is more likely to crash.


It's not a matter of laziness. Most of the time there's just nothing you can do.

Like if you try to construct a URI based on a configured URI string, and get URISyntaxException... wtf should you even do? You can't recover, because your code shouldn't be mutating config. Similarly with basically any JSON parsing exception. And most disk read/write exceptions.
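The common escape hatch for exactly this case (a sketch; the Config class is hypothetical, but URI and URISyntaxException are standard java.net types) is to translate the checked exception into an unchecked one at the boundary and fail fast, since a malformed configured URI is a deployment error, not something the code can recover from:

```java
import java.net.URI;
import java.net.URISyntaxException;

class Config {
    static URI endpoint(String configured) {
        try {
            return new URI(configured);
        } catch (URISyntaxException e) {
            // Nothing sensible to do: surface it as a startup failure.
            throw new IllegalStateException("bad configured URI: " + configured, e);
        }
    }
}
```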

And in modern applications the caller is probably on the other side of an RPC call, and checked exceptions won't propagate sanely across the RPC anyway.


Checked exceptions are great. But they were abandoned due to the functional paradigm, so now it’s cumbersome to use them. Errors should be first-class in any language, not bolted on as an afterthought in either the language or the design. Errors are something that occurs in real life and should be handled as such. Unchecked errors are a pain to deal with and always unexpected; you need to dig into the code to see whether they are thrown. This is a huge issue for me. I believe Java should and could make them easier to use by adding language support to transform them.


They were abandoned long before “the functional paradigm”. They started being a problem when the Java community decided interactions should go through interfaces over concrete types, and it only got worse with generics, then functional streams.


Not sure about that. Reinhold said so explicitly during a conference when they launched Java 8. I don’t see interfaces over concrete types as an issue here.


That checked exceptions are an issue with interfaces is literally what TFA is about.


Yeah, and he’s wrong about it.



