It's not just that. Elm and Rx lack continuous value streams entirely; they are completely discrete. They are also very much asynchronous, so they are closer to the event-stream manipulation done via dataflow in the '70s than to what Elliott and Hudak invented in the late '90s.
For something to be called FRP, it at least needs both continuous and discrete abstractions; the simplest description of FRP involves re-evaluating A + B over time, which isn't really meaningful in Rx or Elm.
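To make that concrete, here's a minimal sketch of what "A + B over time" means with a continuous behavior in classic (Elliott/Hudak-style) FRP. The Behavior type and names here are illustrative, not from any particular library:

    type Time = Double

    -- A behavior is a value defined at every point in time.
    newtype Behavior a = Behavior { at :: Time -> a }

    instance Functor Behavior where
      fmap f (Behavior g) = Behavior (f . g)

    instance Applicative Behavior where
      pure x = Behavior (const x)
      Behavior f <*> Behavior g = Behavior (\t -> f t (g t))

    -- "A + B over time": the sum is re-evaluated at every sampling instant.
    plusB :: Num a => Behavior a -> Behavior a -> Behavior a
    plusB a b = (+) <$> a <*> b

    -- at (plusB (Behavior id) (pure 10)) 2.5  ==  12.5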
I would love to see an example. I've browsed Evan's thesis, and it seems like he explicitly avoids continuous abstractions (behaviors) for event processing.
In Elm, behaviors and events are the same thing. Usually operations like filter and fold exist only on events, and the applicative functor and monadic operations exist only on behaviors, but in Elm a single type supports filter, fold, and the applicative functor operations, while monadic bind is not supported at all. I don't think that's good design, but it does mean that Elm has the functionality of both events and behaviors.
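Roughly, the conventional split looks like this (signatures only, left as stubs; the names are illustrative, not any library's actual API):

    -- Discrete: a stream of timestamped occurrences.
    data Event a
    -- Continuous: a value defined at every moment.
    data Behavior a

    -- Filtering and folding conventionally belong to events...
    filterE :: (a -> Bool) -> Event a -> Event a
    filterE = undefined

    foldE :: (b -> a -> b) -> b -> Event a -> Event b
    foldE = undefined

    -- ...while the applicative operations belong to behaviors.
    applyB :: Behavior (a -> b) -> Behavior a -> Behavior b
    applyB = undefined

Elm's Signal type merges both columns into one.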
I think this really gets to the core strength of Haskell, which for me has always been code that you can rapidly prototype, then migrate into production and keep alive in the codebase for a long time. I've written code in a good deal of languages and haven't found one with the power-to-weight ratio that Haskell has.
The slight cost I have found, though, is that a lot of the tools he mentioned have a rather steep learning curve (monads, parsec, repa, conduits, lens), especially for those coming from an imperative background. But learning any of them is an investment in your programming skills that pays off tenfold down the road.
Completely agree. 9 times out of 10, something compiles and works as expected, and can stay working in production for years. I don't know of any other language where I can do this and keep developing (and prototyping) the codebase.
(This is not to say that testing isn't necessary--in fact, Haskell offers an amazing framework for fuzz testing, QuickCheck--but 100% code-coverage unit testing is a pointless endeavor in Haskell.)
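(For the unfamiliar, a QuickCheck property is just a function whose arguments get generated randomly; a trivial example:

    import Test.QuickCheck

    -- Reversing a list twice should give back the original list.
    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwice  -- checks 100 random cases by default

)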
I will say that "rapid prototyping" of new applications is a bit tedious in Haskell (mainly because of the cabal etc. bootstrapping work you have to do), but prototyping new features in existing ones is smooth.
He said that aiming for 100% code-coverage tests in Haskell is pointless, not that unit testing itself is pointless. Which I think is pretty reasonable, given that a lot of bugs cannot exist by construction in a lot of Haskell code.
Personally, I very much enjoy unit testing in Haskell, especially with the new tasty library, which makes combining all the HUnit/QuickCheck/SmallCheck tests very pleasant.
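For example, a tasty suite mixing an HUnit test case with a QuickCheck property looks roughly like this:

    import Test.Tasty
    import Test.Tasty.HUnit
    import qualified Test.Tasty.QuickCheck as QC

    main :: IO ()
    main = defaultMain tests

    tests :: TestTree
    tests = testGroup "all tests"
      [ testCase "2 + 2" $ (2 + 2 :: Int) @?= 4
      , QC.testProperty "reverse . reverse == id" $
          \xs -> reverse (reverse xs) == (xs :: [Int])
      ]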
Indeed. I'm not a fan of 100% code coverage tests in any language, but it's especially apparent in Haskell where, as you said, if code compiles (and you keep most of your functions pure), there are whole classes of bugs that simply cannot exist.
"100% code coverage unit tests is a pointless endeavor"
Thirsteh said 100% coverage of unit tests is pointless, not that unit testing itself is pointless. I'm not totally sure what thirsteh means by "100% is a pointless endeavor" though.
> Completely agree. 9 times out of 10, something compiles and works as expected, and can stay working in production for years. I don't know of any other language where I can do this and keep developing (and prototyping) the codebase.
How is this even possible? Haskell can't catch the logic errors you make over time and across state, and those are the majority of errors in all programming languages, Haskell included.
I'm not trying to say you can't make errors in Haskell, just that, if you know what you're doing, slip-ups are extremely rare in the language. (Hence "9 times out of 10.")
It's true, to some extent, of other statically checked languages, but they usually don't have separation of effects, sum types, etc.
> So it's 9 times out of ten if you know what you are doing, which is hardly the most common case.
If you don't at all know what you're doing, you're gonna fail every time, anywhere. It is obviously subjective, and I don't think I implied otherwise.
Do you know what sum types are? If so, won't you agree with me that there are many more kinds of errors you can make in languages without them than in ones where the compiler complains if you haven't, e.g., covered each constructor in a pattern match?
Even more so with separation of effects, and other things that only really exist in Haskell.
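To make the sum-types point concrete, here's a minimal Haskell sketch (the Shape type is just an illustration):

    {-# OPTIONS_GHC -Wincomplete-patterns #-}

    data Shape
      = Circle Double
      | Rectangle Double Double

    -- Leave out the Rectangle case and GHC warns (or errors under
    -- -Werror) about the non-exhaustive pattern match.
    area :: Shape -> Double
    area (Circle r)      = pi * r * r
    area (Rectangle w h) = w * h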
> If you don't at all know what you're doing, you're gonna fail every time, anywhere.
Not really. In programming, most of the time you think you know what you are doing but you really don't remember all the states and all the details, which results in bugs.
> Do you know what sum types are?
I do.
> If so, won't you agree with me that there are many more kinds of errors you can make in languages without them than in ones where the compiler complains if you haven't, e.g., covered each constructor in a pattern match?
In non-object-oriented languages, or in object-oriented languages without abstract methods, it's certainly possible.
In object-oriented languages with abstract methods, it's not possible: the compiler will complain with an error for each missing case.
> Even more so with separation of effects, and other things that only really exist in Haskell.
Separation of effects doesn't really improve the situation with bugs. A bug may have its cause inside a pure computation.
Also, it has a fairly shallow learning curve when it comes to syntax (compared to, say, Lisp), at least for someone coming from a C-family background. I learned Haskell after a period of doing a lot of Python programming, and I find the syntax of the two very similar.
I would prefer Haskell with a Lisp syntax. Haskell's syntax adds a lot of accidental complexity to the language. My #1 complaint is how infix operators can be imported and assigned arbitrary precedence/associativity: it is nearly impossible for a human to parse unfamiliar code, let alone understand it. And the sheer size and complexity of the grammar also rears its head when doing non-trivial metaprogramming.
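A small illustration of the fixity problem (the +++ operator is made up for the example):

    -- A library can export an operator together with its own fixity:
    infixr 5 +++
    (+++) :: [a] -> [a] -> [a]
    (+++) = (++)

    -- Whether this parses as 1 : ([2] +++ [3]) or (1 : [2]) +++ [3]
    -- depends entirely on fixity declarations the reader may never see.
    example :: [Int]
    example = 1 : [2] +++ [3]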
We create these problems for ourselves when we write languages with complicated grammars. And for what? So people don't have to learn how to read Lisp code? A lot of the gain of Haskell is that the super-strong type system constrains the form of programs, making it easier to reason about them. Well, constraining the syntax has similar benefits. One is that my editor can more effectively "reason" about the surface syntax of a program. See Paredit, which enables functionality that isn't possible unless you can easily find the extent of an expression.
Lisp has a lot of syntactic complexity hidden under the covers. Sure, symbolic expressions appear regular and uniform, but macros are often radically different from functions in syntax, despite looking otherwise identical in application. A big offender in this regard is the loop macro.
Haskell has a much more baroque syntax than Lisp, and it really doesn't compare to the C family syntactically or in its execution model. At best it's a kind of Pythonized Prolog that thinks Perl is a model rather than a warning.
> it has a fairly shallow learning curve when it comes to syntax
Now imagine a pilot saying: "fighter jets have a fairly shallow learning curve compared to gliders"... Syntax has nothing to do with it, and you will never find a language with a simpler syntax than a Lisp.
(And let's just pretend Common Lisp does not exist - I bet you're scared of Lisp because you've seen code written by an experienced CL hacker, and I know, it tends to scare the shit out of you at first sight :) )
Lots of great Haskell material on his site. In particular, this derivation of the State monad from first principles was the "Aha!" moment for me in understanding monads.
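For anyone who hasn't read it, the derivation boils down to something like this minimal sketch (mine, not the post's exact code):

    -- A stateful computation is just a function from an initial state
    -- to a result paired with the final state.
    newtype State s a = State { runState :: s -> (a, s) }

    instance Functor (State s) where
      fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

    instance Applicative (State s) where
      pure a = State $ \s -> (a, s)
      State f <*> State g = State $ \s ->
        let (h, s')  = f s
            (a, s'') = g s'
        in  (h a, s'')

    instance Monad (State s) where
      State g >>= k = State $ \s ->
        let (a, s') = g s
        in  runState (k a) s'

    -- Example: return the current counter value and increment it.
    tick :: State Int Int
    tick = State $ \n -> (n, n + 1)

    -- runState (tick >> tick >> tick) 0  ==  (2, 3)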
This is a post about categorical monads; I assume that's why it doesn't use the standard symbols of the Haskell monad (>>=, return) and instead refers to the natural transformations (η, μ), as mathematicians do.
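For reference, the usual dictionary between the two vocabularies: η is Haskell's return/pure and μ is join, from which bind falls out:

    import Control.Monad (join)

    -- η (the unit) is return/pure, μ (the multiplication) is join;
    -- bind is then definable from fmap and join.
    bind :: Monad m => m a -> (a -> m b) -> m b
    bind m k = join (fmap k m)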