The thing I miss most is the Maybe monad. In idiomatic Haskell, Maybe serves roughly the purpose that null serves in Java -- a value may or may not be present. You don't need monads to query a Maybe value -- it works roughly like a Java Optional -- but working within the Maybe monad lets you write a series of operations as though the value is always present, knowing that the whole series will short-circuit to Nothing if any step produces Nothing.
For a series of operations, is that one monad or a series of monads?
If I have to use a monad for each operation, that feels like the same order of magnitude of effort as checking for null after each operation. If it's just one monad, then it feels like the monad saves work.
For that matter, the monad may still save work, because I only have to think about how I handle Nothing in one place, at the end of the series of calls. Whereas with null, I have to think about how to handle it after each call that can return null.
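Concretely, here is the kind of contrast I have in mind -- a rough sketch, where the record types, findUser, userAddress, and addressCity are all made up:

    import Data.Maybe (fromMaybe)

    -- Hypothetical records; the Maybe fields stand in for calls that can return null.
    data Address = Address { addressCity :: Maybe String }
    data User    = User    { userAddress :: Maybe Address }

    findUser :: Int -> Maybe User
    findUser 1 = Just (User (Just (Address (Just "Portland"))))
    findUser _ = Nothing

    -- Handling the missing case after every call, the way null forces you to:
    cityManual :: Int -> String
    cityManual uid =
      case findUser uid of
        Nothing -> "unknown"
        Just u  -> case userAddress u of
          Nothing -> "unknown"
          Just a  -> case addressCity a of
            Nothing -> "unknown"
            Just c  -> c

    -- Chained in the Maybe monad: Nothing is handled once, at the end.
    city :: Int -> String
    city uid = fromMaybe "unknown" (findUser uid >>= userAddress >>= addressCity)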
Just one. If you have a series of functions that could return Nothing (Maybe's version of null), you chain them like this, where x is the starting value:
foo x >>= bar >>= baz
This will automatically return Nothing if any of the contained functions do. This does require all of the functions to have the type (a -> Maybe b), indicating that they take a value and can return either another value or Nothing. If you have a function with type (a -> b), indicating that it is guaranteed to return something, you can wrap it like so:
foo x >>= return . bar >>= baz
A complete usage might look like:
f x = fromMaybe defaultValue $ Just x >>= foo >>= bar >>= baz
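To flesh that out into something runnable (foo, bar, baz, and defaultValue are just made-up placeholders here, with readMaybe standing in for a step that can fail and bar for a function that always succeeds and so gets wrapped with return):

    import Data.Maybe (fromMaybe)
    import Text.Read (readMaybe)

    -- foo and baz can fail, so they return Maybe; bar is total, so it gets wrapped with return.
    foo :: String -> Maybe Int
    foo = readMaybe

    bar :: Int -> Int
    bar = (* 2)

    baz :: Int -> Maybe Int
    baz n = if n >= 0 then Just n else Nothing

    defaultValue :: Int
    defaultValue = 0

    f :: String -> Int
    f x = fromMaybe defaultValue $ Just x >>= foo >>= return . bar >>= baz

    -- f "21"   == 42
    -- f "oops" == 0   (foo short-circuits the chain to Nothing)
    -- f "-3"   == 0   (the doubled value is negative, so baz rejects it)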
Are you implying that Java cannot be used to make useful software?
No, of course not. Are people without feet useless?
What does it mean to say that Java does not have feet?
It means that it lacks a capability (the feet), and as a result has no need for the tool that goes with that capability (the shoes). The capability is expressing purity via the type system; the shoes are monads. Monads - and by extension, monad transformers - are a tool designed to assist the expression of imperative algorithms in a pure context.
I phrase it this way because people often make the mistake of assuming Haskell lacks expressive power due to its purity. This is untrue. Haskell is perfectly capable of expressing messy, mutable, imperative algorithms just like any other language. Other languages, on the other hand, struggle to express many of the pure algorithms and data structures used in Haskell; they simply lack the ability to enforce purity within the language and are thus reduced to purity by convention.
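As one illustration (a standard use of the ST monad from base, sketched here rather than taken from anywhere in particular): this factorial is written imperatively, mutating a reference in a loop, yet runST guarantees the mutation cannot leak out, so the function is pure from the outside.

    import Control.Monad (forM_)
    import Control.Monad.ST (runST)
    import Data.STRef (modifySTRef', newSTRef, readSTRef)

    -- An imperative factorial: allocate a mutable cell, loop, and multiply in place.
    -- runST seals the mutation inside, so factorial is an ordinary pure function.
    factorial :: Integer -> Integer
    factorial n = runST $ do
      acc <- newSTRef 1
      forM_ [1 .. n] $ \i ->
        modifySTRef' acc (* i)
      readSTRef acc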
Does lack of ability to enforce purity equal lack of ability to express the algorithm/data structure? Or does it just equal lack of ability to prove the purity?
You can only express some kinds of purity if you can enforce them at the type level. An otherwise pure data structure that contains mutable elements is a violation of purity, a contradiction in terms. If you cannot express a data structure which guarantees that its elements (and their fields and elements, recursively) are pure, you cannot express a pure data structure (also known as a value).
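A small sketch of what that type-level guarantee looks like in practice, using plain Data.Map from containers: an "update" is a new value, the original is untouched, and the types leave no room for anything to mutate either map behind your back.

    import qualified Data.Map.Strict as Map

    -- A persistent map: "updating" it produces a new map; the original is untouched,
    -- and nothing in the types (no IORef, no IO) lets other code mutate either one.
    original :: Map.Map String Int
    original = Map.fromList [("a", 1), ("b", 2)]

    updated :: Map.Map String Int
    updated = Map.insert "b" 20 original

    -- Map.lookup "b" original == Just 2
    -- Map.lookup "b" updated  == Just 20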