I went straight from hacking Scala and Haskell as a hobbyist to a (mostly) front-end JS job, and I've always found that my code, and a lot of good libraries I read, naturally emulate something close to Hindley-Milner typing, using objects as tuples/records and arrays as (hopefully well-typed) lists, as well as the natural flexibility of objects as a poor substitute for Either types.
I'm definitely pleased to see that the designers of this library have also realized that strongly-typed JavaScript was just a few annotations and a type inference algorithm away.
I'm just wondering why nullable types are implemented as such and not as a natural consequence of full sum types, which are inexplicably absent.
Strongly typed JS is actually pretty hard - probably not by Haskell and Scala standards - but if you take promises, for example, the signature of `then` is roughly:
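(a rough sketch in Haskell-ish pseudo-notation; the exact shape is approximate)

then :: Promise e a
     -> (a -> b | Promise e' b)   -- fulfillment handler: a bare value or a new promise
     -> (e -> b | Promise e' b)   -- rejection handler: likewise, and it may throw a new error
     -> Promise e' b
-- (and even this glosses over optional handlers, thrown exceptions, and the recursive unwrapping of nested promises)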
That is, a promise's `then` takes the promise (as `this`) and executes either a fulfillment handler or a catch handler.
If the `fulfill` handler executes, the value is unwrapped and either a new value, or a Promise over a new value with its own type of error, is returned.
Now, if the `reject` handler is executed, the error is unwrapped and either a new value, a promise over a new value, or a new error is returned.
This is quite simple and easy to use because it behaves like try/catch in JS's dynamic type system, with recursive unwrapping - however, it becomes challenging to reason about once you start typing your code and want correct type information for promises.
Static languages generally approach these problems with pattern matching on the type - in JS that's neither common nor feasible at runtime - you just expect a value of a certain type. When I implemented promises in another language (Swift) this was a lot of fun to work through and far from trivial - if their compiler can do this I'd be very impressed.
Promises are just one example.
Anyway - this looks cool. I definitely agree that full sum types would've made more sense - having explicit nullables is usually a smell (like in C#).
I don't care about the type signature of promises. I actually don't care about the type of anything that has a complicated type; I think it's just a huge win that I will now be able to describe the shape of the raw data circulating in my code.
I've always found it really annoying, in Haskell, to see incredibly complex type idioms emerge to allow stuff that doesn't really deserve it. And yes, I'm talking about monad transformers.
Don't get me wrong, I think that Haskell and the typing techniques and idioms it has fostered are a tremendous achievement, but right now I'm more focused on bringing my web code, which is unfortunately a jungle of implicitly-typed garbage, closer to a safe and predictable better-typed form.
I tried to do that in Haskell on the back-end, but every time I tried, I lost mind-boggling amounts of time dealing with the monadic stack of whatever framework I was using.
As the saying goes, I'm not clever enough to use dynamic typing, and bugs happen. Unfortunately, I'm also not clever enough to use real strong typing, and nothing compiles, let alone gets done.
Hence I'm immensely thankful to see Facebook embracing gradual typing in a way that lets me leverage my knowledge of algebraic typing.
I'd like to argue from a Haskell backend perspective that monad transformers deserve exactly the complexity they expose. They give you a handle to factor side effects in sensible ways and to express that code requires exactly some set of effects and no others.
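A minimal sketch of what I mean, using the standard mtl classes (the function and its effects here are made up):

import Control.Monad (when)
import Control.Monad.State (MonadState, get, modify)
import Control.Monad.Except (MonadError, throwError)

-- the constraints say exactly which effects this code may use:
-- a piece of Int state and String errors, and nothing else (no IO, no logging, ...)
bump :: (MonadState Int m, MonadError String m) => m ()
bump = do
  modify (+ 1)
  n <- get
  when (n > 10) (throwError "too many bumps")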
It is not always clear how to factor code which does not have this rigor into effect-typed form. It can be extraordinarily difficult to recognize what effects are being carried out in which parts of code at first—especially if you haven't been forced into the discipline early. Thus, I find it unsurprising that you feel the translation effort is challenging.
But, coming from the other angle—building things with well-defined effect typing from day zero and composing pieces atop one another to reach your final complexity goal—works exceedingly well. Better, it forms a design which can be translated to untyped settings and retain its nice composition properties.
Which is to say not much more than: there is some logic to all that madness and once you're on the "other side" it's hard to judge these "incredibly complex type idioms" as anything other than useful and nearly necessary for sane code reasoning.
I miss transformer stacks a lot when using other languages.
... said the monk, sitting cross-legged with an air of zen inner peace ; )
I think it all boils down to considerations of idealism vs. realism, in the end, and yes, I have to admit, I managed to get a sort of big picture understanding of Yesod and its ORM, and it's definitely a cool design. Can't say I know as much about Happstack, but the hello worlds were smaller. And the authors didn't have to invent two dependency management tools to get it to compile... /off-topic
I actually used something akin to monads to write a quick and dirty parser combinator library... In PHP! It was really fun, but I have to admit that it got really hard to keep track of what was a function, what was a return value, what was supposed to be passed along to the next function, and so on, without any real typing. For the first time I... I wanted monads.
But still, I have 99 problems and I'd say 97 of them are undefined, nulls, erroneous typecasts, unexpected layers of wrapping, and so on. I wrote bad code, my teammates did, and here we are. I want to get rid of those so that I can, at last, have real problems.
I'm not a big user of Yesod, so I can't speak too much to that, though it is notoriously difficult to get Yesod to compile unless you use Stackage.
I would say that types are really difficult to do in your head. It's tedious and error-prone. That said, the advantages are high so it's valuable, e.g. your parser combinators in PHP.
Parser combinators are actually a great example of where monad transformers shine. You can see them as nothing more than a stack of `State` atop `Maybe` which dramatically simplifies the presentation and why they work. Better, you can rip out `Maybe` and replace it with `[]` to get non-deterministic parsers "for free"—all of the code remains the same.
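A minimal sketch of that stack (my own names, not necessarily what's in the write-up below):

import Control.Monad.State (StateT(..), evalStateT)

-- a parser is just State over a failure monad;
-- swap Maybe for [] and the same code gives non-deterministic parsing
type Parser = StateT String Maybe

item :: Parser Char
item = StateT $ \s -> case s of
  []     -> Nothing
  (c:cs) -> Just (c, cs)

runParser :: Parser a -> String -> Maybe a
runParser = evalStateT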
I actually happened to write up about this recently:
https://gist.github.com/tel/df3fa3df530f593646a0
But yeah, gradual typing is seductive for existing codebases. You can just make the jump and beat them with hammers of frustration until the compiler gives you a thumbs up---but it's nobody's cup of tea. If you want discipline, you are far better off setting the rules from t=0 and going from there.
Let me know when you start porting to Haskell. That'll be an exciting time.
You make some good points here; I've also experienced the same with Haskell web frameworks. I think we can agree that the most important things to annotate are points of interaction.
If I'm part of a 5 developer team working on a code base, or I'm using a library someone made - I want the functions I'm calling to be very explicit about what they take and return and I want the functions I provide others to be very clear on what they take and return.
The problem is that even something people use every day like a Promise or an event handler creates very complex types (like the example above). I think any viable solution that expects to be type safe needs to be able to express that.
If your code does not expose any callbacks or anything async, I'd say it's simply not very typical JS code.
The alternative of course is to be _less_ safe about our types. We could say that we treat a promise as:
Promise<A> -> (A -> Promise<B>) -> Promise<B>
(just `bind` from Haskell) - this would let us maintain _some_ type safety, which is better than nothing.
I agree that the facilities to express function composition are woefully insufficient in nearly all languages besides ML derivatives, and that nearly all of the recent frameworks and techniques to express concurrency rely heavily on function composition.
There's certainly a hierarchy; if you don't have typed data then you won't benefit from generics. If you don't have typed containers then forget about monads. If you don't have simple monads then no point worrying about how to compose them.
And yet, as someone who's been working in Scala for nearly 5 years now, more and more of these abstractions are starting to seem "worth it". My first Scala code was quite imperative, mixing random effects left and right. But eventually I started handling async calls explicitly - or perhaps I should say, I became fluent enough in the language that I could make an explicit distinction between sync and async calls without it being too cumbersome - and then I reaped the rewards, with more reliable, more performant, more maintainable code. And then I did the same thing with error handling, replacing surprise exceptions with an explicit Either (stacking this inside the Futures), and again I found my code became clearer, easier to reason about.
I've just finished factoring out database access into a Free monad based construct, and for the first time in my life I can test database access in a smarter way than just creating a (possibly in-memory) testing database and hoping it does the same things a real database would do. The monad tools are good enough to make it easy - as easy as "magic" Spring AOP, but explicit and ordinary. I've written a library for dealing with monad stacks that I'm sure would have horrified my past self of three years ago (https://github.com/m50d/scalaz-transfigure), but I've come here through small incremental steps that have made sense at every stage (it helps that I'm a big believer in Agile). If I'd been dropped into it with a language like Haskell where everything has to be monadic from day 1, I think I'd've given up. I still wouldn't use monads for I/O (at least, not yet) - the advantages don't seem worth the overhead. But I'm glad I'm working in a language where these things are possible, and where I can gradually adopt them in my own time.
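In Haskell-ish terms, the shape of that trick is roughly this (a toy sketch with made-up operations, nothing to do with my actual Scala code):

{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Free (Free(..), liftF)

-- describe database operations as plain data...
data DbOp next
  = GetUser Int (Maybe String -> next)
  | PutUser Int String next
  deriving Functor

type Db = Free DbOp

getUser :: Int -> Db (Maybe String)
getUser uid = liftF (GetUser uid id)

putUser :: Int -> String -> Db ()
putUser uid name = liftF (PutUser uid name ())

-- ...and interpret them purely in tests, against an association list,
-- instead of spinning up a real (or in-memory) database
runPure :: [(Int, String)] -> Db a -> a
runPure _   (Pure a)                 = a
runPure env (Free (GetUser uid k))   = runPure env (k (lookup uid env))
runPure env (Free (PutUser uid n k)) = runPure ((uid, n) : env) k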
Let's say a promise is a thing which can either succeed or fail eventually. If it fails, it gives a value of type e; if it succeeds, a value of type a
Promise e a
Now, `then` operates on the successful result, transforming it into a new promise of a different kind. The result is a total promise of the new kind
then :: Promise e a -> (a -> Promise e b) -> Promise e b
I'll contend now that this is sufficient. Here's what you appear to lose:
1. No differentiation of error types
2. No explicit annotation of the ability to return constant/non-promise values
3. No tied-in error handling
That's fine, though. First, for (1), we'll note that it ought to be easy to provide an error-mapping function. This is just a continuation which gets applied to errors upon generation (if they occur)
mapError :: (e -> e') -> Promise e a -> Promise e' a
For (2) we'll note that it's always possible to turn a non-promised value into a promise by returning it immediately
pure :: a -> Promise e a
Then for (3), we can build in error catching continuations
catch :: Promise e a -> (e -> Promise e' a) -> Promise e' a
We appear to lose the ability to change the result type of the promise upon catching an error, but we can regain that by pre-composition with `then`.
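Concretely, a possible spelling of that (a sketch using IO-of-Either as a stand-in Promise so it type-checks; the names and the stand-in are mine):

type Promise e a = IO (Either e a)   -- stand-in async type, just for the sketch

-- `then` is a Haskell keyword, hence the underscores
then_ :: Promise e a -> (a -> Promise e b) -> Promise e b
then_ p k = p >>= either (pure . Left) k

catch_ :: Promise e a -> (e -> Promise e' a) -> Promise e' a
catch_ p h = p >>= either h (pure . Right)

-- the type-changing catch, recovered by pre-composing with then_
handleInto :: Promise e a -> (a -> Promise e b) -> (e -> Promise e' b) -> Promise e' b
handleInto p onOk onErr = (p `then_` onOk) `catch_` onErr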
So, each of these smaller types is now very nice to work with. They are equivalent in power to the fully-loaded `then` you gave, but their use is much more compartmentalized. This is how you avoid frightful types.
Couldn't agree more. I see this as just another example of how ordinary function composition usually wins out over OOP.
Additionally, AFAICT Promise-like things aren't actually used very much in Haskell in practice; at least not in the code I tend to see/write. MVar is probably the nearest direct equivalent, modulo error handling (which would probably just be handled using an Either in an MVar). Given that the runtime is M:N threaded, there doesn't seem to be much need for Promise and its ilk, since you can just do ordinary function composition and HoFs (with channels for communication). If you want something fancy you just use the marvellous "async" library.
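For instance, a tiny sketch (the file name is made up):

import Control.Concurrent.Async (async, waitCatch)

main :: IO ()
main = do
  a <- async (readFile "config.txt")   -- work happens on another green thread
  r <- waitCatch a                     -- Either SomeException String
  either (putStrLn . ("failed: " ++) . show) putStrLn r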
I think sometimes types lead to simpler designs. For example, why can't `then` just have the following type:
Promise A E -> (A -> A') -> (E -> E') -> Promise A' E'
That would eliminate some strange corner cases and make it easier to explain the function. The special cases could just get their own functions.
Real algebraic data types would probably eliminate the need for E entirely and make it even simpler.
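Here's a hedged sketch of what I mean, with IO over a sum type standing in for Promise (names are mine, not Flow syntax):

import Data.Bifunctor (bimap)

-- the error lives in one arm of a sum type, so Promise itself stays simple
type Promise a e = IO (Either e a)

thenSimple :: Promise a e -> (a -> a') -> (e -> e') -> Promise a' e'
thenSimple p onOk onErr = fmap (bimap onErr onOk) p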
I would argue that `then` is better thought of as:
Promise A E -> (A -> Promise A') -> (E -> Promise E') -> Promise A' E'
, with implicit boxing of bare types and thrown exceptions, as well as flattening of superfluous Promise wrappers. The `flatMap`iness of it is what really makes it interesting, in my opinion.
In JavaScript you can't really use sum types as they are found in Haskell. With Haskell option types you do pattern matching and create a new binding for the non-null value, but in JavaScript you don't do that - you keep using the same object that you tested against null.
case mx of Just x -> f(x)
vs
if (x != null){ f(x) }
What you can do in a JavaScript-like language is use union and intersection types. However, they can get a bit complicated (especially if you allow unions of non-primitive types) and the extra flexibility can confuse the type inference a bit, so I can understand them restricting things to the common case of handling null.
Right, but now you're not using null. The question was how to justify the use of sum types for null... not whether or not they could be emulated in some non-idiomatic manner.
> I'm just wondering why nullable types are implemented as such and not as a natural consequence of full sum types, which are inexplicably absent.
Haskell-style sum types are a generic type with multiple constructors describing the alternatives.
Since the goal of Flow is to typecheck existing JS semantics, the addition of wrapper types and objects to support such sum types makes very little sense, while the addition of "anonymous" union types makes a lot of sense. Dialyzer took the exact same path (except even more so, as its type unions can contain values) as it tried to encode Erlang's existing semantics.
This all seems extremely cool.