Is Haskell the Cure? (mathias-biilmann.net)
175 points by bobfunk on Oct 3, 2011 | 89 comments



I gave haskell a shot as some of my earlier github repos indicate: https://github.com/substack. I even wrote my blog in haskell with happstack, since snap hadn't gotten popular yet.

Haskell is very hard, but even after 3 years of pretty intensive use, I never really felt productive with haskell in the same way that I've felt in other languages. Anything I did in haskell required a lot of thinking up-front and tinkering in the REPL to make sure the types all agreed and it was rather time-consuming to assemble a selection of data types and functions that mapped adequately onto the problem I was trying to solve. More often than not, the result was something of an unsightly jumble that would take a long time to convert into nicer-looking code or to make an API less terrible to use.

I built an underwater ROV control system in haskell in 2010 which went well enough, but I had to tinker with the RTS scheduling constantly to keep the more processor-hungry threads from starving out the other ones for CPU. The system worked, but I had no idea what horrors GHC was performing on my behalf.

Later I built the first prototype of my startup with haskell, but the sheer volume of things that I didn't know kept getting in the way of getting stuff done in a reasonable time frame. Then we started to incrementally phase out haskell in favor of node.

I write a lot of node.js now and it's really nice. The whole runtime system easily fits into my head all at once and the execution model is simple and predictable. I can also spend more time writing and testing software and less time learning obscure theories so that the libraries and abstractions I use make sense.

The point in the article about haskell being "too clever for this benchmark" sums up haskell generally in my experience.


It's pretty much what I mean when calling Haskell hard to learn. For me it's also been a steep learning curve, and my experience hasn't been altogether different from yours.

I started out maybe 5 years ago following tutorials, reading up on all the metaphors about Monads and doing Project Euler problems.

After a while I started to tackle some small web-related things with Haskell and had exactly your experience: running into the limits of my understanding of how the system works while trying to wrap my head around functional datatypes.

I pretty much gave up on Haskell as a practical language at that point, but something kept me coming back once in a while.

Then at a point I had a use for making a small web service fast and the Node prototype I made performed badly and crashed in spectacular ways under high loads. I found Snap and made a quick prototype in Haskell. At that point the experience of years of small experiments must finally have made something click. In a very short time I had a very fast service using almost no memory. It's deployed in production (as a part of http://www.webpop.com) and has been extremely stable.

By now I think I've crossed some kind of barrier, and feel like I'm both being productive and having fun when writing Haskell, but it really didn't come easy to me and all else being equal my experience tells me that a good deal of my colleagues would have an even harder time.


I think part of the issue with learning Haskell is that it seems to invert the typical learning strategy for programming languages. Usually the best advice is: read a little, then write a lot. Typically you can just look at some published code and go "ah yes, that's how you do it". But I find, for better or worse, that Haskell really requires you to understand before you code, which in the end means your study-to-code ratio is very different from that of almost any other language.

Most languages, even lisps, are somewhat tolerant of 'programming by guessing' for beginners. Usually you write terrible code that works, learn more, and see what you did wrong. Haskell is very unforgiving of this: if you don't understand why it works, it probably won't.


I think you're wrong at "if you don't understand why it works it probably won't".

While I long ago gave up PUI (Programming Under Influence), I still occasionally do some in Haskell. After a litre of beer I am pretty dumb, but I can follow clues from the compiler to get something working.

Most of the time, it works the next day, when I sober up. That's in contrast with C/C++. Scripting languages give some power like that, but I can screw myself with them much more violently.

In my humble opinion, Haskell is the language of choice for drunken programmers.


I recently had that same epiphany. For fun, I decided to re-implement, in Haskell, a simple chat server I wrote a while back in Erlang. I found that everything clicked - the type system worked with me instead of against me, and I was able to create prototypes as quickly as I could think of them. But it took two years of thinking about Haskell before I could synthesize code in it. (code here: https://github.com/dmansen/haskell-chat)


Did you perhaps jump into the water too quickly? I'm currently learning a couple of functional languages (including Haskell) and using them in production environments, but my current use is restricted to "I have an input that will always produce a certain output. There are no database or environmental dependencies, this is straight computation. I want to never have to worry about this function ever again". And so far, knock on wood, Haskell has been killer for that scenario. I'll probably eventually transition a lot more of my code to functional languages, but will do so slowly (using Go otherwise).


Haskell is beautiful, I love it, but I can easily see his points. There are a few traps one can easily fall into:

- Reach a point in a complex application where it becomes hard to reason what laziness will do to performance.

- End up in type-hell. E.g. some libraries extensively use existential quantification of type variables. Before you know it, you are chasing "type variable x would escape its scope"-type error messages in perfectly fine-looking code.

- Pattern matching is nice, but if you extensively use it, adding a constructor argument is a lot of work.

- No-one uses the same data type for common things. For instance, for textual data, there is String, Data.Text, Data.Text.Lazy, Data.ByteString, and Data.ByteString.Lazy. These days there is more or less consensus on when to use which type, but you are often converting things a lot. There are also kinds of data for which no consensus has yet been reached (e.g. lenses).

- Artificially pure packages. There are some packages that link to C libraries, but (forcefully) provide a pure interface. (Or in other words: purity is just convention).

- For a lot of code you end up using monads plus 'do' notation, making your programs look practically imperative, but an oddball variation of it.

- Using functions with worse time or space complexity, to maintain purity.

- I/O looks simple, but for predictable and safe I/O you'd usually end up using a library for enumerators. Writing enumerators, enumeratees, and iteratees is unintuitive and weird, especially compared to (less powerful) iterators/generators in other languages.

Learning Haskell is something I'd certainly recommend. It provides a glimpse of how beautifully mathematical programs could be in a perfect world. Unfortunately, the world is not perfect, and even Haskell needs a lot of patchwork to deal with it.


> Artificially pure packages. There are some packages that link to C libraries, but (forcefully) provide a pure interface. (Or in other words: purity is just convention)

Explain? What would the alternative be?

> Using functions with worse time or space complexity, to maintain purity.

This seems like the opposite of your previous complaint.

> For a lot of code you end up using monads plus 'do' notation, making your programs look practically imperative, but an oddball variation of it.

This seems to be a "psychological problem" with Haskell: the idea that because Haskell supports declarative style, it's not OK to be imperative. It makes beginners tear their hair out looking for 'do'-free solutions when they could just use 'do'. C.f. "Lambda: the Ultimate Imperative" (and the rest of that series of LtU papers): http://dspace.mit.edu/handle/1721.1/5790


> Explain? What would the alternative be?

Box the value that results from evaluating an expression that calls impure code in IO?

This is what I'd expect for calling impure code in third-party libraries.


If the library developer can prove that a C operation is pure, why shouldn't he tell Haskell about that?


And if you don't trust the developer, it's easy to fix his mistake:

    unUnsafePerformIO :: a -> IO a
    unUnsafePerformIO = return


Sorry, that doesn't actually work: return never forces (or runs) its argument, so the effect isn't sequenced into IO at all. Note that "boom" is never printed:

    ezyang@ezyang:~$ cat Test.hs
    import System.IO.Unsafe
    unUnsafePerformIO = return
    main = do
      let a = unUnsafePerformIO (unsafePerformIO (putStrLn "boom"))
      a
      a
    ezyang@ezyang:~$ runghc Test.hs
    ezyang@ezyang:~$


Yes, true. Patch the library if the annotation is wrong :)


I would say that after a little more experience, space leaks are the only thing that really worries me in Haskell. It's one of those things that I have to think about a little too much to really feel "safe" about. (The other worry is expressions that evaluate to ⊥ at runtime, but it's been shown that static analysis can solve that problem. I don't actually use those tools, though, so I guess I'm a tiny bit afraid of those cases. Like with other languages, write tests.)
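
The classic leak, for anyone who hasn't hit one, is a lazy left fold (a minimal sketch; whether it actually blows up depends on GHC version, flags, and stack limits):

    import Data.List (foldl')

    -- foldl builds a chain of a million (+) thunks before forcing any of them;
    -- foldl' forces the accumulator at each step and runs in constant space.
    main :: IO ()
    main = do
      print (foldl  (+) 0 [1 .. 1000000 :: Int])  -- lazy accumulator: leaks
      print (foldl' (+) 0 [1 .. 1000000 :: Int])  -- strict accumulator: fine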

Your other concerns don't seem too worrisome to me. Type hell doesn't happen very much, though there are some libraries that really like their typeclass-based APIs (hello, Text.Regex.Base), which can be nearly impossible to decipher without some documentation of what the author was thinking ('I wanted to be able to write let (foo, bar) = "foobar" =~ "(foo)(bar)" in ...').
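
For the curious, that polymorphism looks like this in practice (a sketch assuming the regex-posix package; the same call returns whatever type the calling context demands):

    import Text.Regex.Posix ((=~))

    main :: IO ()
    main = do
      print ("foobar" =~ "(foo)(bar)" :: Bool)        -- did it match?
      print ("foobar" =~ "(foo)(bar)" :: String)      -- the matched text
      print ("foobar" =~ "(foo)(bar)" :: [[String]])  -- all matches plus subgroups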

The data type stuff can be confusing for people used to other languages, where the standard library is "good enough" for most everything people want. A good example is Perl, which uses "scalar" for numbers, strings, unicode strings, byte vectors, and so on. This approach simply doesn't work for Haskell, because Haskell programmers want speed and compile-time correctness checks. That means that ByteString and Text and String are three different concepts: ByteString supports efficient immutable byte vectors, lazy ByteStrings add fast appends, Text adds support for Unicode, and String is a lazy list of Haskell characters.

All of those types have their use cases; for a web application, data is read from the network in terms of ByteStrings (since traditional BSD-style networking stacks only know about bytes) and is then converted to Text, if the data is in fact text and not binary. Your text-processing application then works in terms of Text. At the end of the request cycle, you have some text that you want to write to the network. In order to do that, you need to convert the Unicode character stream to a stream of octets for the network, and you do that with character encoding. The type system makes this explicit, unlike in other languages where you are allowed to write internal strings to the network. (It usually works since whatever's on the other end of the network auto-detects your program's internal representation and displays it correctly. This is why I've argued for representing Unicode as inverse-UTF-8 in-memory; when you dump that to a terminal or browser, it will look like the garbage it is. But I digress.)

I understand that people don't want to think about character encoding issues (since most applications I use are never Unicode-clean), but what's nice about this is that Haskell can force you to do it right. You may not understand character sets and character encodings, but when the compiler says "Expected Data.ByteString, not Data.Text", you find the Text -> ByteString function called "encodeUtf8" and it all works! You have a correct program!
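
Concretely, the boundary dance looks something like this (a minimal sketch; decodeUtf8 throws on invalid input, so real code would use decodeUtf8' or similar):

    import qualified Data.ByteString as B
    import qualified Data.Text as T
    import Data.Text.Encoding (decodeUtf8, encodeUtf8)

    -- bytes from the socket -> Text for processing -> bytes for the socket
    handleRequest :: B.ByteString -> B.ByteString
    handleRequest raw =
      let txt = decodeUtf8 raw   -- boundary: octets in, characters out
          out = T.toUpper txt    -- work in terms of characters
      in  encodeUtf8 out         -- boundary: characters in, octets out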

With respect to purity: purity is a guarantee that the compiler tries to make for you. When you load a random C function from a shared library, GHC can't make any assumptions about what it does. As a result, it puts it in IO and then treats those computations as "must not be optimized with respect to evaluation order", because that's the only safe thing it can do. When you are writing an FFI binding, though, you may be able to prove that a certain operation is pure. In that case, you annotate the operation as such ("unsafePerformIO"), and then the compiler and you are back on the same page. Ultimately, our computers are a big block of RAM with an instruction pointer, and the lower you go, the more the computer looks like that. In order to bridge the gap between stuff-that-Haskell-knows-about and stuff-that-Haskell-doesn't-know-about, you have to think logically and teach the runtime as much about that thing as you know. It's hard, but the idea is that libraries should be hard to write if they'll make applications easier to write. If everyone were afraid to make purity annotations, then everything you ever did would be in IO, and all Haskell would be is a very nice C frontend.
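
For a concrete (if trivial) example of such an annotation, a binding author can give a C function a non-IO type directly in the foreign import, which is the same promise unsafePerformIO makes (a sketch):

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign.C.Types (CDouble)

    -- sin has no observable side effects, so we assert purity in the type;
    -- GHC may now share, reorder, and inline calls like any pure function.
    foreign import ccall unsafe "math.h sin"
      c_sin :: CDouble -> CDouble

    main :: IO ()
    main = print (c_sin 1.0)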

> For a lot of code you end up using monads plus 'do' notation, making your programs look practically imperative, but an oddball variation of it.

That's really just an opinion, rather than any objective fact about the language. I find that do-notation saves typing from time to time, so I use it. Sometimes it clouds what's going on, so I don't use it. That's what programming is; using the available language constructs to generate a program that's easy for both computers and humans to understand. Haskell isn't going to save you from having to do that.

> Using functions with worse time or space complexity, to maintain purity.

ST can largely save you from this. A good example is Data.Vector. Sometimes you want an immutable vector somewhere in your application (for purity), but you can't easily build the vector functionally with good performance. So, you do an ST computation where the vector is mutable inside the ST monad and immutable outside. ST guarantees that all your mutable operations are done before anything that expects an immutable vector sees it, and thus that your program is pure. Purity is important on a big-scale level, but it's not as important at a "one-lexical-scope" level. Haskell lets you be mostly-pure without much effort; other languages punt on this by saying "nothing can ever be pure, so fuck you". I think it's a good compromise.
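
A minimal sketch of that pattern with the vector package: mutate freely inside ST, and V.create guarantees nothing outside can observe the mutation:

    import Control.Monad (forM_)
    import qualified Data.Vector.Unboxed as V
    import qualified Data.Vector.Unboxed.Mutable as M

    -- build an immutable vector by writing in place inside ST
    squares :: Int -> V.Vector Int
    squares n = V.create $ do
      v <- M.new n
      forM_ [0 .. n - 1] $ \i -> M.write v i (i * i)
      return v

    main :: IO ()
    main = print (squares 10)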

> I/O looks simple, but for predictable and safe I/O you'd usually end up using a library for enumerators. Writing enumerators, enumeratees, and iteratees is unintuitive and weird, especially compared to (less powerful) iterators/generators in other languages.

IO is hard in any language. Consider a construct like Python's "with":

    with open('file') as file:
        return file

That construct is meaningless, since the file is closed before the caller ever sees the descriptor object. But Python lets you write it, and guaranteeing correctness is up to you. In Haskell, that's not acceptable, and so IO works a little differently. Ultimately, some things in Haskell are a compromise between simplicity of concepts and safety guarantees at compile time. You can write lazy-list-based IO in Haskell, but you can run out of file descriptors very quickly. Or, you can use a library like iteratees, and have guarantees about the composability of IO operations and how long file descriptors are used for. It's up to you; you can do it the easy way and not have to learn anything, or you can do some learning and get a safer program. And that's the same as any other programming language.
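
Incidentally, lazy IO lets you write almost exactly the Python bug above (a sketch):

    import System.IO

    -- withFile closes the handle on exit, but hGetContents reads lazily,
    -- so by the time the result is forced the handle is already gone and
    -- you usually get truncated (often empty) contents.
    main :: IO ()
    main = do
      contents <- withFile "file" ReadMode hGetContents
      putStr contents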


Haskell is great for pure algorithms like that.

As for jumping in too quickly? I was a pretty heavy haskell user for about 3 years.


I've also never been productive with Haskell. It's cute, it raises interesting problems if you enjoy wrangling with mathy problems for the sake of it, but when it comes to getting stuff done in a deeply imperative, eager world, the impedance mismatch is simply overwhelming.

Moreover, I was very proficient in OCaml before I discovered Haskell, and it just spoiled me. It has all of Haskell's qualities which matter (type inference, algebraic data types, a naturally functional mindset) without the parts you regularly have to fight (mandatory monads and monad transformers, algorithmic complexity in a lazy context, tedious interfacing to the underlying OS).

If you felt like Haskell had many amazing qualities, spoiled by a couple of unacceptable flaws, especially when it comes to acknowledging how the real world works, I'd suggest that you give OCaml a try. You should be proficient with it within a couple of days.


I believe you are attributing a library issue to a language. Before today (and by today I literally mean a month ago when Yesod released a cross-platform development server that automatically re-compiles your web application) there wasn't a productive set of libraries and tools to build a web application with in Haskell. 3 years ago when you started, and even until 1-2 years ago the library situation was absolutely horrible. Web frameworks with very little to offer, mediocre templating languages, not even an attempt at a database ORM. Tutorials would have you write a bunch of code to achieve a detail taken for granted in libraries used in web frameworks of other languages.

Please take a look at doing real-world, productive web development with Yesod. http://www.yesodweb.com

You are still going to take a productivity hit in Haskell due to lack of libraries in comparison to Ruby, Python, etc. So the practical reason for using Haskell today is to take advantage of the amazing performance, take advantage of Haskell non-web libraries in the backend, or for a high assurance project where its type system can rule out most common web development bugs.

oh, and Yesod is even faster than the mentioned Snap framework which is already much faster than Node (and unlike Haskell, Node does not scale to multi-core). Although Yesod isn't going to automatically cache the fibonacci sequence for this artificial benchmark because in the real world I have never once been tasked with writing code like that for a web application.


> I believe you are attributing a library issue to a language.

Reasoning about laziness? Polymorphism that can only be implemented using existential types plus Typeable? Even purity is a double-edged sword (some algorithms are inherently mutable)[1]. Some of Haskell's problems in real-life projects can definitely be attributed to the language itself.

> So the practical reason for using Haskell today is to take advantage of the amazing performance,

My experience with everything from simple checksum functions to parameter estimators (ML) is that Haskell is generally at least 2-10x slower than C (even when introducing strictness where necessary, unboxing constructors, etc.). So, in practice you'll often end up doing heavy lifting in C anyway (whether it is a database server or a classifier that works in the background), and in the end it doesn't matter so much whether you use Haskell or a dynamic language (performance-wise) if a significant amount of time is required processing requests.

> where its type system can rule out most common web development bugs

Right, this is where Haskell currently has an edge, because it does not only make it easy to make DSLs (as e.g. Ruby), but typechecks everything as well.

> oh, and Yesod is even faster than the mentioned Snap framework which is already much faster than Node

Yes, but the benchmark you implicitly point to (the pong benchmark) is very synthetic and says fairly little about real-life use. Until we see Snap and Yesod more in production, the jury is still out.

[1] Sure, you can do quicksort in the ST monad, but it will require a lot of unnecessary copying.


Yes, reasoning about laziness and difficulties using types are library issues, particularly if a library is forcing you to learn about existential types. In Yesod we are very conscientious about what types (even just polymorphism) are exposed to the user, because they can make error messages, etc., difficult.

I don't think the Pong benchmark http://www.yesodweb.com/blog/preliminary-warp-cross-language... is that synthetic - I think it demonstrates concurrency capabilities fairly well. We just have to keep in mind which web applications benefit from high concurrency.

As for raw performance of a single request, I agree that the average web application won't see a great difference for the 80% case. However, for most Ruby web applications that I have worked on I have had to spend time re-writing slow parts of the application because Ruby was truly the bottleneck, and I would have been much better off using almost any compiled language with types.

Ruby applications I have worked on always have more complicated deployments, worse response times, and huge memory usage due to the lack of async IO. Async IO is possible in Ruby & Python, but it still sucks because it is extra work and you have to always be on guard against blocking IO. So I hope we can at least agree that async IO is a big win, and that Haskell & Erlang are the best at async IO because it is built into the runtime and no callbacks are required. And likewise deployment to multi-core is no extra effort in Haskell/Erlang, whereas in Node, Ruby, or Python you will need to load balance across multiple processes that are using more RAM.


> Yes, reasoning about laziness and difficulties using types are library issues.

I disagree: if the language were strict by default, this would not be an issue. It is a language problem that is forced on libraries.

> However, for most Ruby web applications that I have worked on I have had to spend time re-writing slow parts of the application because Ruby was truly the bottleneck,

My point was that Haskell is often a lot slower than C or C++, so people will rewrite CPU-intensive code anyway. Look at many of the popular Haskell modules where heavy lifting is done (from compression to encryption): most of them are C bindings. That code will be nearly equally fast in Haskell as in, say, Python.

BTW, I am not arguing that Haskell is not faster than Python, Ruby, Clojure, etc. But for computationally intensive work C/C++ is still the benchmark, and that is what people will use in optimized code, whether from Haskell or Python.

> Particularly if a library is forcing you to learn about existential types.

But why is that? Because the language does not support, in an intuitive fashion, the kind of polymorphism that is commonly needed. People need containers with mixed types that adhere to an interface in some applications, and a commonly-used method to realize this in Haskell is existential types.

> we can at least agree that async IO is a big win

Yes.

> And likewise deployment to multi-core is no extra effort in Haskell/Erlang, whereas in Node, Ruby, or Python you will need to load balance across multiple processes that are using more RAM.

Since most modern Unix implementations do COW for memory pages in the child process of a fork, this is not so much of an issue as people make it out to be. The fact that you mention Erlang is curious, since spawn in Erlang forks a process, right? Forking is more expensive than threading, but again, in most applications negligible compared to the handling of the request.


The biggest reason why there are Haskell packages wrapping C libraries is not performance, but to reuse good C libraries, and because Haskell has an excellent interface for C libraries. Many people prefer to write Haskell for computationally intensive tasks than C/C++. Depending on the problem it is possible to get within 2x the raw speed of C, and you get much nicer code to maintain and much easier concurrency/parallelism opportunities.

I have not found it to be the case that existential types are commonly needed (and need to be forced on the user). Maybe you are in a different problem domain. I find Haskell's regular polymorphism to work very well for 95+% of my use cases.

Fork is not negligible to handling a request, but pre-forking theoretically could be. In practice, COW fork does not automatically solve multi-core. The Ruby garbage collector is not COW friendly and thus there is little memory savings from COW (unless you use the REE interpreter which has a slower COW friendly garbage collector but saves on memory and GC time). I haven't looked at this for other languages but I assume this is still a limiting issue. Also, you are still stuck doing load-balancing between your processes, which will limit or complicate your deployment. I don't know much about Erlang other than async IO is built into the language, which is why I mention it in the same breath as Haskell.


In case anybody is wondering: both Erlang and Haskell have very lightweight user-space threads built in, which are mapped onto a small pool of OS threads to take advantage of however many cores you have. It's very slick and fast, and probably the Right Thing.
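
A quick sketch of how cheap these are in GHC (a green thread costs on the order of a kilobyte; run with +RTS -N to spread them over all cores):

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Monad (replicateM_)

    main :: IO ()
    main = do
      -- 100,000 green threads, multiplexed over a handful of OS threads
      replicateM_ 100000 (forkIO (threadDelay 1000000))
      threadDelay 2000000  -- crude: let them finish before main exits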


Yes, there's also Akka, which is being incorporated into Scala's standard library, and F#'s MailboxProcessor. The thing is that Erlang/OTP and its behaviors have many people pounding on heavily loaded apps in production and improving its toolchain, whereas GHC and Akka have only recently (the last couple of years, I think) been working to get the stack working: dispatchers and load balancing (like Erlang's reductions), and bringing GC up to snuff.


Akka looks pretty sweet, but it looks like you still have to worry about blocking code in external libraries. In Haskell (and Erlang, IIRC), blocking code is deferred to a background thread automatically so you don't have to be consciously on-guard for it. You also get proper pre-emptive multithreading, while Akka looks like a hybrid of an event loop and a thread pool.

Is this a substantial headache with Akka, in practice?


(late reply)

it's pretty hard to google akka deployments but:

http://www.quora.com/What-companies-are-using-Akka-commercia...

http://groups.google.com/group/akka-user/browse_thread/threa...

and in terms of memory overheads and how many erlang process-type things you can spin up:

http://akka.io/docs/akka/1.1/scala/tutorial-chat-server.html


Why use quicksort over arrays when you can do mergesort over lists and get 1) stable behavior and 2) solution to maximum and k-max problems due to laziness? Do you really need arrays?

And quicksort for arrays in ST monad wouldn't copy anything unnecessary.

Actually, I've seen many claims that some algorithms are inherently mutable. So far none stand close scrutiny.

Matrix operations? You'd better copy intermediate results; that way you'll be safer and faster (parallel algorithms). Good compilers do that behind the curtain (array privatization).

Sorting? Use maps or lists, that way you won't forget something important.

Graph operations? Immutable (inductive) graphs are slower by a constant multiplier and sometimes are faster than their mutable counterparts (tree-based maps are faster for changes than arrays).

The last one is even more amusing when applied to compiler optimizations (i.e., to non-trivial graph algorithms): http://lambda-the-ultimate.org/node/2443 The pure version is less buggy, faster (!), and allows more optimizations.


> Why use quicksort over arrays when you can do mergesort over lists

Sure, you can do merge sort. Except that the list split step in Haskell is O(n) in time, while it is constant when using arrays. The same goes for merging lists, since you have to 'reattach' the second list as the tail of the first list.

> And quicksort for arrays in ST monad wouldn't copy anything unnecessary.

You have to copy the data from whatever representation you had to something that lives in a memory block in the ST monad.

> Actually, I've seen many claims that some algorithms are inherently mutable. So far none stand close scrutiny.

You have probably never read Okasaki...

The rest of your argument proposes that slow is better because of persistence. First, persistence is often not required; second, persistence can also be implemented in a mutable language.


> Except that the list split step in Haskell is O(n) in time, while it is constant when using arrays.

Oh, no. You shouldn't split the list by calculating its length.

Try this instead:

    evens (x:_:xs) = x : evens xs
    evens xs = xs
    odds = evens . drop 1

    splitList xs = (evens xs, odds xs)

Voila! Completely lazy, O(1).

As for merge, see here: http://lambda-the-ultimate.org/node/608?from=0&comments_... That thread contains a proper merge algorithm.
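
To round out the sketch, the textbook merge that pairs with splitList above (the linked thread has a more refined version):

    merge :: Ord a => [a] -> [a] -> [a]
    merge [] ys = ys
    merge xs [] = xs
    merge (x:xs) (y:ys)
      | x <= y    = x : merge xs (y:ys)
      | otherwise = y : merge (x:xs) ys

    msort :: Ord a => [a] -> [a]
    msort []  = []
    msort [x] = [x]
    msort xs  = merge (msort es) (msort os)
      where (es, os) = splitList xs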

And yes, I never read Okasaki in full. But I have used Haskell semi-professionally since 1999 and professionally since 2006.


I agree with you in most cases.

> Sure, you can do merge sort. Except that the list split step in Haskell is O(n) in time, while it is constant when using arrays. As well as merging lists, since you have to 'reattach' the second list as the tail of the first list.

It's no problem writing a merge-sort in Haskell that uses O(n log n) time. So who cares what the asymptotics of the individual elements of the algorithm are? (You may care about the actual speed of the whole thing and its parts, though.)


If you want a canonical example of an algorithm that's imperative and harder to do in a functional setting, cite union-find (See "A Persistent Union-Find Data Structure"). Searching is optimal in a functional setting, too. Just not the classic quicksort.


> Polymorphism that can only be implemented using existential types plus Typable?

I'm curious where you came across this. In an external library you were using, or in the process of trying to implement some kind of dynamic typing in your own code?


Both :). To give one specific example: I was working on a transformation-based learner for learning tree transformations. Say that a rule consists of an action and a list of conditions that make the action fire if they are true for a particular tree node. Obviously, you'll want to be able to add new conditions, so you make a type class for conditions:

    class Cond a l where
      applies :: a -> TreePos Full l -> Bool

Now, say that a rule contains a list of conditions which belong to the type class Cond (Cond a l => [a]). You can see the problem coming. Say I provide a condition of the type MyCondition; then the list will be of type [MyCondition]. However, in practice it would be inflexible to restrict a list of rules to one type. You want to be able to add new conditions outside the module or package binary. So, instead I used existential typing for conditions:

  data Condition l =
    forall c . (Cond c l, Eq c, Show c, Typeable c) => Condition c
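
For readers who haven't met existentials, here is a self-contained toy version of the same pattern (Shape standing in for Cond):

    {-# LANGUAGE ExistentialQuantification #-}

    class Shape a where
      area :: a -> Double

    newtype Circle = Circle Double
    instance Shape Circle where area (Circle r) = pi * r * r

    newtype Square = Square Double
    instance Shape Square where area (Square s) = s * s

    -- the existential wrapper lets one list mix different shape types
    data AnyShape = forall s. Shape s => AnyShape s

    totalArea :: [AnyShape] -> Double
    totalArea xs = sum [area s | AnyShape s <- xs]

    main :: IO ()
    main = print (totalArea [AnyShape (Circle 1), AnyShape (Square 2)])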


I had a similar experience with Erlang. Node has totally taken over the Erlang-shaped hole in my life ;)


Let's have a talk about the $ operator. When you use it more than once per line, you're writing code that looks weird and is hard to read. Switch to the similar function-composition operator, and everything looks more idiomatic.

Instead of:

    fibServer x = quickHttpServe $ writeBS $ B.pack $ show (fibonacci x)

Just write:

    fibServer = quickHttpServe . writeBS . B.pack . show . fibonacci

The case for $ is where you want application instead of composition:

    fibOf42Server = quickHttpServe . writeBS . B.pack . show . fibonacci $ 42

I even write things like:

    main = print =<< foo

instead of:

    main = foo >>= print

for consistency.

Anyway, it's a little style thing, but it's nice to use the composition operator (.) when you want composition and the application operator ($) when you want application. It makes the code look nicer and it shows its intent more clearly. And really, they are different concepts, even if they both type-check the same.

And finally, remember that function application, by default, is the highest-precedence operator in Haskell. When you write:

    foo . (bar 42) . baz

It's the same as:

    foo . bar 42 . baz

because of operator precedence. $ only exists to change the order of operations for a particular expression.


I'll add that a nice little benefit of using:

    a . b . c . d $ e

over:

    a $ b $ c $ d $ e

is that any sub-expression taken from the first expression is valid and can be refactored out into its own name. (.) is associative and ($) is not.
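
Concretely, with the fibServer chain from above, the middle of the pipeline can simply be given a name (same hypothetical Snap imports as the parent example):

    encode = B.pack . show

    fibServer = quickHttpServe . writeBS . encode . fibonacci

With the $ version you can only pull out a suffix (everything to the right of some $), not an arbitrary middle slice, since ($) is right-associative.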


Really interesting style comment. Will keep this in mind.


Please stop with the toy benchmarks and pretty one-liners that show how awesome Haskell is.

There is a growing list of smart programmers who get all enchanted with Haskell, jump into it wholeheartedly, and end up frustrated (see the bottom of this message). GHC makes the typical C++ compiler seem fast. Once code grows past homework-problem size, all hope of understanding memory usage is lost. I don't think people really get how bad it is. The whole culture of Haskell is based around static checking, yet you have to run a program in order to find out if it blows your memory limit several times over.

Haskell is still a neat language, but we need less advocacy based on toy programs and more honest realism.

(Here's a typical, non-superficial example: http://wagerlabs.com/haskell-vs-erlang-reloaded-0)


That post is from 2005. The situation is entirely different today w/respect to speed (both the compiler and the addition of ByteString and Text libraries), and productive libraries, particularly for web development. Likewise, it is a rare case that you would run into memory consumption issues.

I do agree that there is entirely too much enthusiastic toying around in Haskell and not enough real world users and honesty about limitations.


To me, there's a dichotomy between the "if it compiles, it usually works" aspect of Haskell (and how this is often touted as superior to the dynamic typing, test-driven approach) and that you can't get a picture of memory usage until you run and profile the code. In my experience, hard to understand memory consumption issues are common and take effort to solve.

Reference: http://blog.ezyang.com/2011/06/pinpointing-space-leaks-in-bi...


That blog post says nothing about the frequency of memory leaks, but shows that there are good tools to help you in your effort to solve them. One thing to keep in mind is that there are memory issues with every language. I just debugged one in Ruby yesterday, and there wasn't a good tool readily available for that effort. Do you know what the memory consumption of your programs in other languages is before running them?

As a contradictory anecdote, I have never once had a memory consumption issue with Haskell code. Haskell is actually in a nice position w/respect to memory now that enumerators (which always use constant memory) are taking hold. I have no doubt you encountered many memory leaks, but I don't think your experience completely generalises to modern Haskell.


To be fair, as I understand it the toy benchmark was someone else's and he was offering a haskell implementation.


It is somewhat of a shame that learning curve plays such a significant role for career programmers.

You would expect that people that spend years and years working with their tools would be willing to put a few weeks or months into learning their most important tool: the programming language. It seems most programmers get frustrated and abandon learning of different programming paradigms very quickly.


The problem is not why one "would be willing to put a few weeks or months into learning their most important tool", but why one would be willing to put a few weeks or months into learning the next silver bullet, and repeat that two or three times a year.

Experienced developers have learned that, typically, newer languages are better than older ones, but they typically do not get better by leaps and bounds across the whole domain. Instead, language evolution typically is a matter of two steps forward, ten sideways, and one step back.

Also, experience shows that new languages often get overhyped as making only forward steps. Given that, it does not make sense to switch horses too often, or one would be forever learning and never productive.


I don't think anyone has been advocating any programming language as the "next silver bullet".

Even a small productivity gain (say 5%) over a long period of time can make a very large difference, and is worth spending weeks to learn.

Additionally, Haskell isn't a new little "fad" language. It is a pretty old research effort that accumulated many novel and useful ideas that are worth learning. I understand someone who knows Python and does not think learning Perl/Ruby/Lua will teach him anything substantial/new. But I think even a cursory look at Haskell will remove any doubt about whether it contains novel ideas to learn.

Learning about programming languages enriches you as a programmer, and I can't imagine spending a few weeks learning novel languages and the reward not far-outweighing the costs.


Learning about novel/different languages is totally different from learning new languages, which I thought the original remark was about. The former can often be done in a few hours and will pay itself back fairly soon; the latter takes weeks, and after that, you will still be at risk of, months into a project, having to discover that there is no good library to do X yet, or that library Y wasn't the best choice after all, or having to learn some neat trick that makes debugging way easier.

So, yes, I agree that learning about languages is something one should do often, but I do not think one should try to become fluent in a new language (and its libraries) too often.

And yes, IMO that does apply to Haskell, too.


The reason I mentioned learning about Haskell is because it allows you to realize that you should also learn to use Haskell :-)

One cannot be expected to learn the thousands of languages out there. But learning a bit about many of them is possible -- and then learning to use the most interesting ones is most probably worthwhile.


"Even a small productivity gain (say 5%) over a long period of time can make a very large difference, and is worth spending weeks to learn."

The question then raised is whether any given new programming language or paradigm will bring that 5% productivity gain, and in what circumstances. If it were an obvious and clear path to greater productivity, and superior to other paths to greater productivity (such as spending more time learning your editor and shell and environment, or learning about a new library in the language your work is written in, or learning new tricks in your current language), no one would hesitate to increase their productivity in this way.

You seem to assume that people are choosing not to become more productive by opting not to switch to Haskell or learn Haskell or something about Haskell. There are thousands of programming languages. Shall we learn them all to become five thousand percent more productive?

I'm not opposed to learning new languages. I think folks should tinker. But, I don't think it is provable that learning Haskell will make you more productive than other activities.


Maybe that's true of most programmers who bother to look up different paradigms. My experience is that most programmers overall aren't even aware of different paradigms, let alone that things could be better: they're taught what they're taught in school or at home and don't move beyond that. I've heard the phrase "Well if you know C++ you know it all" at least three times.


You must have never done any meta-programming in C++ ;). It's pretty much a functional language.

Also, STL provides some infrastructure for FP-like programming (defining functors, argument binding, and providing map/fold-like transformations). But given that C++98 didn't provide lambda functions, it was all a bit too painful.


> You must have never done any meta-programming in C++ ;). It's pretty much a functional language.

I've done enough to scream in terror and run towards a Lisp should anyone suggest such an awful thing! I think there's a huge, huge gap between your 'pretty much a functional language' and standard 'a functional language' [with proper meta-programming capabilities].


Yeah, but the syntax and the verbosity hide your aim.

In the obscure years prior to C++11, meta-programming in C++ would have required a language lawyer.

In Haskell, the syntax is so nice that it is easily readable, and it doesn't get in your way.


> In Haskell, the syntax is so nice that it is easily readable, and it doesn't get in your way.

Unless you want if-then-else in the do notation (yes, I know that there is a GHC extension for this), disagree with its whitespace rules, or like record syntax (which subsequently pollutes your namespace).

Also, point-free style is nice, but it is easily and often abused, leading to unreadable code.

> Yeah, but the syntax and the verbosity hide your aim.

Many people would argue the same of Haskell. So much semantics are encoded in the particular operators, monads, functors, monad transformers, arrows being used, that they are hidden from plain sight.


> Also, point-free style is nice, but it is easily and often abused, leading to unreadable code.

That's why it's called point-less style. It's too seductive.

> Many people would argue the same of Haskell. So much semantics are encoded in the particular operators, monads, functors, monad transformers, arrows being used, that they are hidden from plain sight.

In a sense. But at least Haskell is parseable. And overloading is only done in a very systematic manner. So if something fishy's going on, you at least see strange symbols you haven't seen before.


Not to be too contrarian, but until I see proof to the contrary I think Norvig said it best:

In terms of programming-in-the-large, at Google and elsewhere, I think that language choice is not as important as all the other choices: if you have the right overall architecture, the right team of programmers, the right development process that allows for rapid development with continuous improvement, then many languages will work for you;


Architecture heavily depends on the language. You have to make different choices for C++ than for Java, not to mention Haskell.

Also, I think that the comma before "the right" in Norvig's statement means logical AND. If we rewrite the statement, it will look like this: if you have the right overall architecture AND the right team of programmers AND the right development process that allows for rapid development with continuous improvement, then many languages will work for you.

I think there are too many ANDs here. In most realistic situations you cannot have such luxury.

Also, the choice of Haskell (or a similar language) allows you to address at least two points from Norvig's statement: the right team and the right development process.

Those who have learned and applied Haskell almost cannot form the wrong team. Almost - as we cannot rule out failure completely.

The right development process is almost ensured by the strong type system. Type systems like Haskell's can be viewed as a tool to spread requirement changes through the complete program.

(that's why it seems hard to introduce or change a constructor in a data type)

So all in all I think that languages make a difference here. For many languages you should fulfill those three points, for some languages those points fulfill themselves.


As much as I loathe Java and co, I have to give them that their mature development tools make up a bit for their weakness.

I.e. in Haskell you make a change to a type and propagate it until the compiler stops complaining. In Java you click some 'refactor' button in your IDE, and your changes will propagate through the code base automatically.

That's less of a comment on the languages, since Haskell will probably grow better tools some day, but more a comment on the relative stages of maturity.


> the right development process that allows for rapid development

Well, rapid development is the reason people write webapps in Python/Ruby/Node and not C++. We have to stop pretending that language choice is immaterial; had it not been for C, we would still be typing in our single and double mouse clicks into an OS written in assembly.


Have you ever seen a project succeed or fail where anyone involved said "You know, if we had used a better programming language it would have all worked out"? I have seen projects succeed or fail based on those three things Norvig calls out; I have never once seen a project succeed or fail because of the language used. Is it important that we have moved beyond hand-coded assembly? Unequivocally, yes it is. Are you going to fail because of your language selection? Maybe if you pick assembly; beyond that, language selection isn't even a rounding error on the probability of success.

You can certainly develop rapidly in C++ if you have the right process and people. In fact that is where that quote came from: Norvig noting that devs at Google were not hampered by their choice of Java and C++.


The ecosystem for Haskell is improving rapidly. My startup built a computer vision application on top of easyVision, with the intention of rewriting it in ObjC. Instead we are working with the Haskell community to target the mobile platform. A year ago that would have been a dicey bet.

About our Haskell experience: Yes, the learning curve seems steep, but mainly because of the things you have to unlearn (OMG no for-loops!). However, functions are the most modular things ever invented. That translates into an uncanny ability to add features quickly. A sophisticated type system catches many errors at compile time.


I love Haskell; I do a lot of work with it. That said, I use Python for the web. As nice as Snap is, Haskell just doesn't have the vast array of quality libraries for web development that Python does. Lately, this means that I do web development in Flask, and heavy lifting in Haskell.



Saw your comment about your desire to launch a product with personality. I'd be happy to help (for free) if you want. @BWFeldman


I'll say this again: Ted shouldn't have wasted everyone's time highlighting the response time of the request. He effectively benchmarked V8 right on his blog and then called it slow. Now everyone's lining up to demonstrate that part to be untrue.

Ironically, every one of his clients in the ab concurrency test will receive their responses before the users of the hypothetically parallel Python and Ruby services, because Node responds an order of magnitude faster. So he didn't actually demonstrate a problem.


Why is the Fibonacci sequence used as a benchmark in an argument about concurrent programming? The naive Fibonacci computation is a recursive algorithm with inherent dependencies on previous calculations that prevent effective concurrent execution. The Fibonacci algorithm executed concurrently is going to spend an inordinate amount of time creating tasks that do a trivial calculation (adding two numbers together).

If you want to benchmark concurrency, at least pick a benchmark algorithm that exercises concurrency. The FFT comes to mind, but there are probably lots of better examples (that is a challenge to HNers ;-).


He was not benchmarking concurrency; he was pointing out that Node is a single-threaded system that essentially implements old-style cooperative multitasking, where a single task will block everything else. He could have used sleep() and it would have illustrated the same point (and more elegantly, since half of the responses miss the point entirely and focus on the Fibonacci part).

Node developers probably don't do a lot of computationally complex stuff, but when they do, they have to think about the concurrency problem. Even something as trivial as sorting a large list or parsing a huge chunk of JSON is going to stop all other requests from executing.


But he isn't benchmarking web server concurrency, because he is doing a single curl:

  $ time curl http://localhost:8000
  165580141
  real 0m0.016s
  user 0m0.007s
  sys 0m0.005s

So he is running Fibonacci(40) once with the web server. The only concurrency/parallelism that is happening is in the recursive Fibonacci algorithm. I stand by my contention that the Fibonacci algorithm is a very poor test of concurrency/parallelism.

I stand by my contention that he should have implemented an algorithm that could be solved in concurrent pieces and then benchmarked node.js against his favorite language. If the algorithm cannot be parallelized effectively, it doesn't matter how many tasks you spawn to solve it (cooperative or otherwise), the algorithm's dependencies will cause all the tasks to block and effectively serialize their execution.


I'm referring to the original post (http://teddziuba.com/2011/10/node-js-is-cancer.html), which did test concurrency. The Haskell guy missed the point entirely and seems to have given up after Haskell was shown to memoize his function.


I think that's where the "well-executed trolling" comment comes in. He essentially whipped everyone into a tizzy by complaining that a framework for helping to get better performance out of I/O-bound web services by handling I/O calls asynchronously doesn't work so well if you take a CPU-bound monkey wrench and jam it directly into the gearbox.

I suspect that it worked so well because the idea of using Fib to talk about performance is kind of built into the collective programmer unconscious. The whys, hows, and whats of using Fib to talk about performance are somewhat less well-entrenched, though. So there's room to trip people up by getting them to go, "Yeah, this sounds interesting, and I recognize all the words so it probably isn't technobabble!"


You are talking about parallelism. Concurrency is demonstrated just fine by the example, as it is being executed on a web server on every request.


Wasn't the point of the original post that node.js blocks the event loop while it executes functions and thus effectively kills concurrency? Not how fast it calculates fibonacci numbers and sends it over http...


The point was that it kills parallelism – Node is just a single-threaded event loop, running on a single core. And since computing fibonacci numbers is a CPU bound activity, that type of benchmark would be relevant but for the memoization bit.

EDIT: Well also, the author would have to actually benchmark this vs. Node with many concurrent clients in order for it to be relevant; here he's just timing a single request from start to finish, which obviously doesn't say anything about how this scales.


Hasn't the author already said that the measuring of Fibonacci was not the point of his tirade? Which makes the line in this post, "I think a lot of people missed the main point of Dziuba's troll", slightly amusing. Is there now going to be someone running this 'benchmark' in whatever language they can? One of the blogs already posted said he's going to find time to run it in C.

I'll do my part. Delphi, here I come. ;)


My benchmark was mostly a parody, since Haskell just memoized the call and never really did the work.

The point of the article was more the difference between the languages that really tackle concurrency (Haskell, Clojure, Go, Erlang) and Node's way of simply offering one solution that works for a lot of problems where the common scripting languages (especially PHP) don't work that well.
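
The usual trick is a lazily shared top-level list along these lines (a simplified sketch, not the exact code from the post):

    -- computed elements are shared across every call to fib
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    fib :: Int -> Integer
    fib n = fibs !! n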


Haskell broke the Programming Language Shootout when the compiler and runtime optimized the busy-loop tasks into no-ops and memoization.

It is hard to write a Haskell program that is gimped enough to be in league with other languages in a synthetic benchmark.


Don't forget Akka (esp Scala-Akka) in that list!



I'm not interested in this Fibonacci benchmark until someone does it in TriINTERCAL.

PLEASE GIVE UP


I don't think Haskell itself is really the answer.

If I read the article correctly it's simply a matter of concurrency and parallelism that's important.

There are a host of languages that do that quite well and Haskell just happens to be one of them.


How do those languages fare in other fields, like constraint programming?

http://hackage.haskell.org/package/monadiccp

The real point is that Haskell is quite good in many areas and is excellent in parallelism and concurrency, while other languages are excellent in concurrency but not so good in other areas.

Those many languages are the answer for the sole field of concurrency and Haskell is the answer when you combine many fields, one of which could happen to be concurrency.


Sure, Haskell is good for more than just concurrent programming, but the article was leaning on concurrency and parallelism in its comparison with Node.js.

And I should also point out that said "blub" languages can also implement those features which they lack and which Haskell includes by design. Some have better features than Haskell, IMO (e.g. Qi/Shen's sequent types and the ability to turn the type system off when you don't need it).

Again, Haskell is a good language. I just don't see it as a "cure." There are many other options.


How can the "blub" languages implement Haskell's type-classes as libraries? Or generalized type inference? What about Haskell's higher-kinded polymorphism? And pattern matching?

Lisp can do some of these, but it is not exactly a "blub" language. Is there a nice comparison of Qi and Haskell? Once you implement such a large, non-trivial system (such as an advanced type system), I really doubt using Lisp macros rather than implementing a compiler is easier. Macros that do such non-trivial things also do not compose well, so I doubt Lisp is beneficial for this purpose.


> I don't think Haskell itself is really the answer.

What was the question?


No one has characterized the right task scheduling outcome.

What tool will give us the best task scheduling outcome, with appropriate effort?

Node schedules each task to run until it is done or you tell it to do something else. This seems simple and honest. The impact of excessive CPU load is obvious.

Other tools offer automatic task preemption and time slicing between tasks. Once CPU load gets high the impact is much less obvious.


For every application there is a language best suited... Let's stop trying to force every language to be good at everything and then compare them as though they were all the same, shall we?


What's more important is applying some of Haskell's important concepts - functional programming and dividing your program into tiny self-contained parts. You can write this way in most languages - Ruby, Python, Scala, etc. The fancier parts of Haskell - lazy evaluation, static typing, whatever - are less important to making software that works than its functional nature.


The static typing is essential to making software that works (and scales). Dynamic typing requires a lot more test code and test code is expensive to write, maintain, and repeatedly execute.


I don't think the point of Ted Dziuba's rant was that every request is a large calculation. In Node.js, if most results are small and generated quickly, then when one large calculation request comes in, all the small ones stop going out until it is done. A Haskell web server like Snap should not have this problem.


If Haskell is the Cure, what is the Disease? 8-))

Haskell occupies a niche similar to Hamilton's quaternions (for classical physics) and Heisenberg's matrices (for quantum mechanics) - not mandatory, inaccessible to the masses and abandoned with haste once a more intuitive tool is found.

But they will always be there if you need them.





