The Joy and Agony of Haskell in Production (stephendiehl.com)
324 points by stefans on Feb 20, 2016 | 92 comments



> If care isn’t taken, projects can quickly devolve to become unmanageably slow to compile, usually the problem is avoidable with some care.

This is currently not the case. It's sometimes not avoidable at all. Compile times are a real concern right now. A GHC maintainer has even confessed to it being a very significant issue.

https://www.reddit.com/r/haskell/comments/45q90s/is_anything...


As a big Haskell fan, the memory usage of GHC on certain modules is one reason I don't use it as much as I'd want. Several times I've had to double my swap file or increase the RAM on a VPS to compile some Haskell package, and I don't want to impose that on users of my software. Usually people don't build Haskell on their servers, just deploy binaries, but I do all my work on a VPS right now. Anyway, compiler resource use is a pretty big factor for me.


My only experience with compiling Haskell on a VPS involved over 100 GB of swap I/O during a >24-hour compilation of LambdaBot on a 512 MB machine.


In a utilitarian sense, the pain caused by this could probably be greatly reduced with a robust system of cached binary builds. Right now we're all just compiling the same dependency trees over and over again, wasting cycles.

So since making GHC drastically more efficient is an enormous problem, maybe getting Stack/Cabal to download binary caches would be the best way to go forward.

There's a Nix build server that offers all of Stackage in binary form. That really takes a lot of pain away, at least if you're on a supported architecture (x86_64, I guess). But it's run by a volunteer individual, I think.

Of course, Debian and others have been packaging Haskell binaries forever, but I don't think most people want to deal with distribution packages for development.

I'm sure there are lots of discussions about this in other places... All I've found from my random googling is this post which is about binary sharing on the same computer:

https://www.fpcomplete.com/blog/2015/09/stack-more-binary-pa...


I've seen Stack download prebuilt binaries for some packages but build others... always having prebuilt versions of text/aeson/vector/lens/etc would definitely speed up resolver upgrades.


http://bazel.io

I've never used the open-source variant, but caching of intermediate targets allowing for speedy incremental builds is a big theme.


Good article. I'm involved with a large-ish OCaml project[1] and some of these things apply there too.

In particular we avoid "academic" features of the language like functors[2] and first class modules, because they obscure the flow of the code and are a headache for ordinary programmers to understand. I often get requests from people from the OCaml academic community asking if we have any available positions, and the answer has so far always been no.

[1] https://github.com/libguestfs/libguestfs/tree/master/v2v

[2] https://realworldocaml.org/v1/en/html/functors.html


>In particular we avoid "academic" features of the language like functors[2] and first class modules, because they obscure the flow of the code and are a headache for ordinary programmers to understand.

If you're hiring OCaml programmers who can't understand (or learn to understand) functors, you're probably doing something wrong. Even using the stdlib Hashtbl requires an understanding of functors.


As I said, the problem with functors is the same as the problem with OO - it obscures the flow of the code. You can look at some function call and not understand (just by looking at the text) which code executes next. I'd prefer not to hire people who will "functorize" (or OO-ize) everything. When I said "ordinary programmers", I was referring mainly to myself, but also to casual contributors.


Hmm, I found functors a bit confusing on my first encounter with the language, but I wouldn't call them 'academic'. If you've ever used Haskell typeclasses, they serve almost the same purpose.

In OO land, they're just a way to ensure that an argument satisfies an interface, exposing certain public functions and properties — just like an object in Java has to implement the Comparable interface in order to be sorted in a collection.


Functors are modules that depend on another module. This works like a function whose argument is a module with type given by a signature.

There isn't anything exactly analogous in Java. Functor application can introduce new type definitions at compile time and enforce static type safety. You need this so that you can statically require that you can only take unions of sorted collections whose types came from the same ordering module applied to the same ordered set module.


Yes, I know my example isn't perfect, but I've found it's a good way of explaining the general concept to people not familiar with ML's module system. Thanks for your explanation though!


Haskell typeclasses are a headache that many Haskell experts eschew.


Wow, really? I learned some Haskell on the side a few years ago, and typeclasses were one of my favorite features. They allowed so much expressiveness from such small building blocks. They seemed like a cornerstone of the language.

I'm surprised to hear they're often eschewed.


Highly abstract and general typeclasses for standard mathematical and algebraic concepts are extremely useful and not eschewed. With them you can do genuinely generic programming in a syntactically convenient way. Examples include: Functor, Applicative, Monad, Num, Floating, Integral, MonadState, MonadReader, Profunctor, Contravariant, ...

Typeclasses which are ad hoc, specific, and typically defined on a per-project basis are much more debatable. If you have the concept of "rendering to the screen" and you have a typeclass Renderable with a render method, then this is really no better than just writing separate functions for each Renderable instance. This isn't genuinely generic programming, it's just symbol overloading.
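
To make this concrete, here's a sketch (Score and Renderable are made up): the lawful instance lets fully generic code like mconcat work unchanged, while the law-free class is just a shared name for unrelated functions.

    -- Lawful: mempty and (<>) must satisfy identity and associativity,
    -- so generic code like mconcat works for every instance.
    newtype Score = Score Int deriving Show

    instance Semigroup Score where
      Score a <> Score b = Score (a + b)

    instance Monoid Score where
      mempty = Score 0

    total :: [Score] -> Score
    total = mconcat

    -- Law-free: nothing connects the instances, so this buys little
    -- over plain functions renderCircle, renderSquare, ...
    class Renderable a where
      render :: a -> String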


Hey, you're talking about my library Renderable - and I'm totally up for finding a more idiomatic way to express the problem. It just so happened that type classes were the easiest way to do that.


Ha, sorry, didn't mean to make it sound like I was talking about anything in particular! I just made up that example on the spot. A "Renderable" thing fits into a common pattern where you see typeclasses used in this way.

(Are you talking about this library? https://hackage.haskell.org/package/renderable-0.1.0.0/docs/...

I don't actually see a Renderable class there anyway.)


Ahh - I thought you may have read my articles on HN or Reddit - how self-centered of me :)

Yes, there's no 'Renderable' typeclass but there are 'Primitive's. There are no laws that the 'Primitive' typeclass instances must obey, but there are associated types that need to be satisfied. I tried to write this without a typeclass and I just couldn't get it to check, so I turned to TypeFamilies to encode the relationship between types. I'd love to make it more idiomatic though!

[I'd love some feedback if you're so inclined](https://github.com/schell/renderable/blob/master/src/Data/Re...)


I wouldn't say they are eschewed.

Haskell programmers do seem to prefer typeclasses that come equipped with some set of meaningful "laws" that help to reject "unreasonable" instances.

And even without that, they are very useful for data type conversions and the like.


Are you saying that you never hire anyone at all to work on your project, or that you think so little of the community that created the language your project uses that you lie to them about job openings?


Neither of those.


> I often get requests from people from the OCaml academic community asking if we have any available positions, and the answer has so far always been no.

I don't understand what you're saying. Are those applicants to Red Hat, or developers asking to contribute to libguestfs that you say no to?


I mean hires. Of course anyone who sends patches is welcome.


I understand why one would avoid first-class modules (I don't use them a lot either). However, I fail to see why functors are too complicated, since they are basically just functions. Could you explain?

Also, according to this[1], there are basically three main contributors, including you, so I fail to see how training would be an issue ...

[1]: https://github.com/libguestfs/libguestfs/graphs/contributors


That graph covers the whole project, not virt-v2v. Is there a way to limit it to particular directories? Not that I could see very easily.


Monads are essentially functors...


The word "functor" means something very different in OCaml than it does in Haskell.


Not just essentially. All monads are functors.


Not in OCaml. It's a different concept.


There seem to be a hundred language definitions of Functor. Even Haskell's doesn't exactly correspond to the mathematical understanding of the term (if it did, Set would be a functor)

C++ is the worst offender, though.


How is Set a functor in a mathematical sense?


You can define a function setMap such that if you setMap any function f from a -> b where Set a and Set b are valid types, then setMap f lifts f to a function Set a -> Set b. setMap will also satisfy the functor laws (or law, if you understand the parametricity derivation of the second law, which I don't :) ). Or in other words, setMap id = id.

In fact, the mapping is absolutely obvious. Haskell slightly brain damages us by assuming that setMap would have to apply to all a->b functions, which can't be done because a and b need to be Ord.
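
A sketch of that, using Data.Set from containers:

    import qualified Data.Set as Set

    -- Data.Set.map already has this shape. The Ord constraint on the
    -- result type is what stops it from being the fmap of a Functor
    -- instance, since fmap must work for unconstrained type variables.
    setMap :: Ord b => (a -> b) -> Set.Set a -> Set.Set b
    setMap = Set.map

    -- The law mentioned above holds: setMap id = id.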

Although apparently it _can_ be done in GHC 8. ish.

https://gist.github.com/ekmett/f783ea6af2e041b7b887


... A monoid in the category of endofunctors


Comments based on our experience with production Haskell at Front Row:

* stack is pretty solid at multi-package builds, save everything in one single git repo for easier snapshotting. See yesod for reference

* use stack with stackage LTS unless you have a really good reason not to

* TH is nice to avoid, but you also miss out on great libraries like Persistent. Seems reasonably hard to dodge that one if you're sold on the conveniences of the yesod ecosystem

* We've been overall pretty happy with classy-prelude as Prelude replacement. Can throw off beginners at first, but is quite convenient to work with.


I think Opaleye will work well without TH.


This was good. Thank you.

The part that most worried me was: "If we look at the history of programming, there are many portents of the future of Haskell in the C++ community, another language where no two developers (that I’ve met) agree on which subset of the language to use."

I also perked up when he talked about the difference between pipeline-paradigm coding and coding involving monadic trees.

I'd love to dive into the language, but with F# I'm happy enough that I code as little as possible and create lots of value. While I think that would be even more true in Haskell, it doesn't seem worth the switch -- yet.

Having said that, I'm definitely seeing some parallels between my own experience in F# and this author's experience in Haskell, especially the part about multiple ways to solve a problem. Are there no good books about common FP best practices? Seems like much of what you would consider a "best practice" would apply whether it's F#, OCaml, or Haskell.


There might be good such books, I wouldn't really know. The Scheme-oriented How to Design Programs seems like a good candidate.

I think the Haskell world would benefit from talking a bit more about "design patterns." I've seen the concept get a fair bit of abuse because of the claim that Haskell's abstraction facilities are good enough to eliminate the need for Visitor Pattern style boilerplate—which has truth to it, but throws out the baby with the bathwater.

Design patterns as in "common ways to structure programs and parts of programs" are fundamental for people who are learning to program productively. They constitute the repertoire of coding on the level above syntax. They're the principal structures of the idiomatic lexicon. The good ones seem obvious once you understand them, but a poor grasp of them causes vague confusion and big missteps.

In Haskell (and pure FP in general), there are a lot of useful design patterns that beginners pick up from the ambient culture if they're lucky:

  - the reader/state/IO monad transformer stack (sketched just after this list);
  - domain values as algebraic structures e.g. monoids;
  - functor composition;
  - free monad interpreters;
  - Xmonad-style pure core with I/O wrapper;
  - explicit ID values for representing identity;
  - recursion schemes;
  - and so on and so on.
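
As an example of the first pattern, a minimal reader/state/IO stack (Config and App are made-up names for illustration):

    import Control.Monad (when)
    import Control.Monad.IO.Class (liftIO)
    import Control.Monad.Reader (ReaderT, asks, runReaderT)
    import Control.Monad.State (StateT, evalStateT, get, modify)

    data Config = Config { verbose :: Bool }

    type App = ReaderT Config (StateT Int IO)

    step :: App ()
    step = do
      modify (+ 1)                -- state layer: bump a counter
      v <- asks verbose           -- reader layer: consult the environment
      n <- get
      when v (liftIO (print n))   -- IO at the base of the stack

    runApp :: Config -> App a -> IO a
    runApp cfg app = evalStateT (runReaderT app cfg) 0
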
Sure, Haskell can represent a lot of these things as formal abstractions, and that's wonderful—but people still need to learn when and how to use them and adapt them for their domains.

What I'm describing is far from a Haskell-specific problem, actually—there aren't that many resources in general that focus on these kinds of patterns.

When I was a beginner programmer, I was lucky to find Ward Cunningham's original wiki, the Portland Pattern Repository, which was all about spreading and discussing this kind of cultural conceptual knowledge.

There was a strong humanistic influence (from Christopher Alexander, through the early agile movement before it became a high-octane consultancy buzzword) and Haskell's culture has more DNA from mathematics and logic, and so treats patterns differently...

Still, I would love to see more discussion about functional patterns, at all scales from function implementation to application architecture.


Yes. People say "patterns are just first-class functions" but they miss the point that not all functions can be combined with each other in manageable ways, so there is still an art to it.

Also, stuff like "strictness annotation" is not a function.


> Also, stuff like "strictness annotation" is not a function.

Ah but it should be :)

[Not a function actually, but a type constructor: http://h2.jaguarpaw.co.uk/posts/strictness-in-types/]


This felt like a really fair and unbiased overview of the pros and cons of using Haskell. It's really refreshing to see that kind of honesty without any sensationalism. And it seems to have a lot of really interesting social insights about the situation, which I found surprisingly interesting.


Diehl has a lot of good no-nonsense newbie-accessible writing on Haskell.


For those looking at the where to start (http://www.haskellforall.com/2015/08/state-of-haskell-ecosys...) link in the post, here is the more up to date github repository (https://github.com/Gabriel439/post-rfc/blob/master/sotu.md).


> Haskell code tends to be of high quality by construction, but for several reasons that are only correlated; not causally linked to the technical merits of Haskell. Just by virtue of language being esoteric and having a relatively higher barrier to entry we’ll end up working with developers who would write above average code in any language.

I think this happened to Ruby for some time, not anymore probably. Maybe the next language to get the benefits of the "experienced early adopters effect" will be Elixir.


I think it's already happening with Rust. People are doing really cool things with it.


Any top apps?

I don't think you can beat other languages yet, except in some very niche areas. There's not even incremental compilation!

Hell, even python can infer function types these days!

Rust has been around much longer than Swift, for example, and yet there are many more production Swift apps. Whereas Rust seems limited to play stuff.

Of course, given another 2-5 years it should catch up in many ways. Comparing it to mature languages, or languages like Swift (from the world's largest company paying some of the world's best language developers) is not really fair.

Definitely a playground for enthusiasts at this point. Which I guess is what you meant.


Dropbox has Rust at the core of their product now. We have a fair amount of production use.



The same person behind that linked tweet has openly remarked about Dropbox's Rust usage, see for example here: https://www.reddit.com/r/programming/comments/3w8dgn/announc...


They do use a lot of Go, yes. Doesn't mean that they can't use Rust too. The two languages are good at different things.


Swift reached 1.0 a full year before Rust did. Just because Rust was developed in the open and Swift wasn't doesn't mean that Rust has been usable longer than Swift. :P


That said, Rust 1.0 was way more usable than Swift 1.0

Just because something says 1.0 doesn't mean it's usable :P


> Strings: The string types are mature, but unwieldy to work with in practice. It’s best to just make peace with the fact that in literally every module we’ll have boilerplate just to do simple manipulation and IO.

Conversion is part of the hassle, the other part is not having common functions (like, say, "splitPrefix") that will work across all string-like types.

For this, I recommend the monoid-subclasses package, which, among other goodies, offers the TextualMonoid typeclass with instances for many string-like types.

http://hackage.haskell.org/package/monoid-subclasses-0.4.2/d...
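
A sketch of what that polymorphism buys you, assuming the splitCharacterPrefix method from monoid-subclasses (the class's core primitive):

    import Data.Monoid.Textual (TextualMonoid, splitCharacterPrefix)

    -- One definition that works for String, strict and lazy Text, and
    -- any other TextualMonoid instance alike (this assumes the
    -- monoid-subclasses API described above).
    dropSpaces :: TextualMonoid t => t -> t
    dropSpaces t = case splitCharacterPrefix t of
      Just (' ', rest) -> dropSpaces rest
      _                -> t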


Is there any good article on how Haskell got into a situation where there are multiple different string types? I understand how it happened with C/C++ (e.g. on Windows), but Haskell is much more modern than that.

I feel there must be some sort of story or interesting thing to learn here. Or is it just the usual str vs widestr type problems?

Also was interested to see the comment about huge records and the memory pressure it can cause. Seems like that's really an issue with immutability. I was expecting the author to provide some sort of advice or workaround for it, but apparently not.


I don't know an article, but the overall situation is pretty straightforward to understand.

The original Haskell strings were linked lists of characters. This was simple and elegant and worked well with the functional programming approach of the time (1980s, by the way, so maybe Haskell in its origins isn't quite so modern as you think). Nobody was much concerned about high performance string operations in Haskell at the time.

Inevitably, later people wanted to add more performant string types. But should they be lazy or strict? And do you want an abstract representation of Unicode, or do you want something more immediately suitable for arbitrary binary data? Enter four more string types. And now here we are with 5 string types in common use.
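
Concretely, the five usually meant by that count:

    import qualified Data.Text as T               -- strict Text: abstract Unicode
    import qualified Data.Text.Lazy as TL         -- lazy Text
    import qualified Data.ByteString as BS        -- strict ByteString: raw bytes
    import qualified Data.ByteString.Lazy as BSL  -- lazy ByteString
    -- plus the original String, which is just a type alias for [Char]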


Thanks.

By "should the types be lazy or strict", you mean, if you do a string operation like uppercase/lowercase/substring/replace/etc, is that operation lazy or strict? Or are you meaning decoding bytes to characters or ... what aspect of the type itself can be lazy or strict?


Yes, the issue is whether the various string operations are lazy or strict. But whether it's possible to implement lazy operations does depend on the type itself. If the type is implemented as just an array of bytes in memory, it would be impossible to do anything lazily, because there's nowhere to store thunks (unevaluated values), only data.
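
A quick demonstration of the difference: lazy Text is a lazy list of strict chunks, so it can represent an infinite string, which no flat array ever could.

    import qualified Data.Text.Lazy as TL
    import qualified Data.Text.Lazy.IO as TLIO

    main :: IO ()
    main = TLIO.putStrLn (TL.take 10 (TL.cycle (TL.pack "ab")))
    -- prints "ababababab": cycle builds an infinite lazy Text, and
    -- take only forces the chunks it needs, leaving the rest as thunks.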


I don't know about such an article, but Haskell's basic string is a linked list of characters. Nobody likes that.


I am not a professional developer (I've never been paid to write software), but I now write some prototype code in Haskell so real engineers can pick it up from me later.

For me, Haskell has been a huge productivity booster. It forces me to write good code, and 90% of the mistakes I normally make are caught at compile time. I use servant for creating my API server and it is a pleasure to use (being able to create APIs declaratively is very easy for me to understand as a non-professional). I recently had to write some front-end code in JavaScript and I noticed how simple mistakes can take 1/2 hour to debug (you pass in an array of objects instead of an object and you will wonder why your `for (k in obj)` logic is not working).

I refactor huge parts of my code - can gut a particular API endpoint response and rewrite it without worrying about introducing more bugs. This allows me to start writing code without a lot of planning and refactor as I go instead of spending a lot of time upfront thinking through the implications.

Using Hoogle or Hayoo to search for functions based on types is just simply awesome. If you want a function that returns the index of a list element that satisfies a predicate, you don't need to come up with search words; instead you just look for it based on the types, like so: http://hayoo.fh-wedel.de/?query=%28a+-%3E+Bool%29+-%3E+[a]+-.... This does not mean you can solve all problems this way, but it gets you easy answers much faster than Googling or asking on SO.

Most developers write their APIs and then need to spend more time documenting them. With servant I just derive documentation directly from the types, so my documentation moves in lock step with my code.
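
For those who haven't seen servant, a minimal sketch of that declarative style (the endpoint and User type are made up):

    {-# LANGUAGE DataKinds #-}
    {-# LANGUAGE DeriveGeneric #-}
    {-# LANGUAGE TypeOperators #-}
    import Data.Aeson (ToJSON)
    import Data.Proxy (Proxy (..))
    import GHC.Generics (Generic)
    import Network.Wai.Handler.Warp (run)
    import Servant

    data User = User { name :: String } deriving Generic
    instance ToJSON User

    -- The whole API is a type; handlers, clients and docs derive from it.
    type UserAPI = "users" :> Capture "id" Int :> Get '[JSON] User

    server :: Server UserAPI
    server uid = pure (User ("user-" ++ show uid))

    main :: IO ()
    main = run 8080 (serve (Proxy :: Proxy UserAPI) server)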

That said, using Haskell has its challenges (especially as a beginner).

* Compile-refresh cycle is frustrating. With Python or Go, you make a change, refresh the page, and it is there. With Haskell, the 30s wait can become painful. My code is organized into modules, but it is hard to avoid dependency-related compile issues -- I tend to put most of my data records (mostly related by business logic) in one module, and if you add anything to that module, you end up recompiling everything that depends on it (which is most of the code). I am sure a pro-Haskeller would do it differently, but whatever suggestions are available online are not steering me in the right direction.

* Starting out with a fresh framework is not trivial for a beginner. servant is really nice because it came with very good documentation and also a tutorial by someone on how to use it with wai (http server) and persistent (an ORM). Without it, I would not have been able to get to where I am now.

* Not all libraries are created equal -- some of the common use cases are well documented or have enough documentation that you will be ok. But once you venture beyond common use cases, trying to use a library is not easy. Most of the time, types help, and being able to test things in the REPL is nice, but that can only take you so far. I recently looked at a library for creating a MIME string but got lost in undocumented code.

* Haskell's community is great. You will always find someone to help you. However, as the author was saying, you want to get some work done and you cannot wait 2 days for an answer on Reddit or SO. IRC is great for specific libraries (say servant or Yesod) but general #haskell can be daunting -- 90% of the people seem to be discussing Lenses or Free Monads and your stupid type error is not going to get any attention.

* Easy parts of Haskell are very easy to pick up, and the hard parts seem impossible to fully comprehend (I will think I understand Monad Transformers and will try to implement some logic, but will get stuck in type-check hell and quickly give up).

Btw, I did not know about fieldLabelModifier until now and it will cut down a lot of boilerplate. That said, I cannot wait for GHC 8 and its DuplicateRecordFields! Records are really annoying for writing business-logic-driven API servers.
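
For anyone else who missed it, here's roughly what fieldLabelModifier looks like with aeson's generic deriving (record and field names are made up):

    {-# LANGUAGE DeriveGeneric #-}
    import Data.Aeson
    import Data.Char (toLower)
    import GHC.Generics (Generic)

    data User = User { userName :: String, userAge :: Int }
      deriving (Show, Generic)

    -- Strip the "user" prefix so the JSON keys become "name" and "age"
    -- instead of "userName" and "userAge".
    jsonOpts :: Options
    jsonOpts = defaultOptions { fieldLabelModifier = lowerFirst . drop 4 }
      where lowerFirst (c:cs) = toLower c : cs
            lowerFirst []     = []

    instance ToJSON User where
      toJSON = genericToJSON jsonOpts

    instance FromJSON User where
      parseJSON = genericParseJSON jsonOpts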


A workaround for long compile times is trying to stay within GHCi whenever possible and reload from there.

http://chrisdone.com/posts/haskell-repl

http://chrisdone.com/posts/making-ghci-fast

The Haxl team seem to use this strategy:

"Haxl users at Facebook do a lot of development and testing inside GHCi. In fact, we’ve built a customized version of GHCi that runs code in our Haxl monad by default instead of the IO monad, and has a handful of extra commands to support common workflows needed by our developers."

http://simonmar.github.io/posts/2016-02-12-Stack-traces-in-G...


I find ghcid very useful. It reloads your code automatically when it detects a file change.

https://github.com/ndmitchell/ghcid http://neilmitchell.blogspot.de/2014/09/ghcid-new-ghci-based...


#haskell has always helped me, provided that I did my homework first. If they don't tackle your question, wait a little while and break it down into smaller pieces. It makes life easier for everyone, and usually allows them to help you faster.


Fantastic response.

In regards to changing data records, I've suffered the same problem as well. At best I've isolated what I can into separate modules so the impact of a small change is smaller, but that obviously isn't universally applicable.

Note that there is also a #haskell-beginners IRC channel where it's sometimes easier to get a response to certain questions than in the much broader #haskell channel. Both channels are full of incredibly nice people for sure.

I too did not know about fieldLabelModifier and am excited about the GHC 8 changes as well!


Thank you. This is the reason I read HN.


I don't agree that TH is bad. It's bad for generating real code (I wouldn't use it to generate complicated logic), but just like you don't write Eq, Show, Ord instances yourself and instead let the compiler derive them for you, you use TH to generate JSON instances. I see TH more as an extension of deriving, so I can easily derive custom typeclasses, than as a tool for generating real code.
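
For reference, this is the sort of use being described (the Point type is made up):

    {-# LANGUAGE TemplateHaskell #-}
    import Data.Aeson (defaultOptions)
    import Data.Aeson.TH (deriveJSON)

    data Point = Point { x :: Double, y :: Double }

    -- One splice generates both the ToJSON and FromJSON instances,
    -- much like a deriving clause would.
    $(deriveJSON defaultOptions ''Point)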


For Aeson instances you really should be using the Generics. I do agree though that there are useful libraries (AcidState/SafeCopy is one example) that cannot reasonably be used without TemplateHaskell. While TemplateHaskell should be avoided if possible, it isn't the only consideration in choosing a design and dependency tree.


The article does not go into detail why TH is bad, and neither do you. Why should I be avoiding it? What are the exact problems with it, except the occasional breakage with new GHC releases?

Generics is also not without its faults (it inflates compile time/memory; see, for example, the problems surrounding aeson 0.10). If the article is correct when it says that [TH is] "an eternal source of pain and sorrow", then you're trading one pain for another.


As a maintainer of a >130kloc Haskell codebase spanning thousands of modules and hundreds of packages, I avoid TH primarily for one reason: slow builds. Painfully slow.

For cases where GHC.Generics is an option, I always take GHC.Generics over TH. Yes, Generics can be slow at times too, but this is somewhat controllable with various OPTIONS_GHC pragmas. Either way, there is no loader and interpreter overhead... and a lot of Generics code compiles in less time than it takes TH just to load `text` or `aeson` ;-).

Other reasons:

  * Some TH libraries add implicit dependencies that are not tracked by the build system
  * GHCi/TH's loader is different than the OS loader, sometimes causing failures if a TH dependency touches C++ libraries
  * No cross-compiling ability... not an issue for me, but it's painful for those targeting JavaScript (via ghcjs), ARM or smaller x86 embedded systems


This case seems to increasingly be handled by DeriveGeneric. I realise the performance isn't quite the same, but so far my experiences of GHC generics have been pretty positive.


What's the performance overhead of instances based on Generic instead of TH? In PureScript for example generic instances are so slow it makes them unusable.


I wonder how much time is spent on deriving instances and whether some sort of caching would make a big difference. After all, if the datatype definition has not changed, there's no need to regenerate the derived code.


Use PureScript and node in production instead. Great performance, interop with JS, and it's eagerly evaluated so runtime semantics are easy to understand. PureScript does not have Haskell's record problem.


I arrived in Mountain View tonight. I want a job. It must involve Haskell. Anything else is a waste of time. My prediction: Haskell will be ready for production when people are ready to engineer code that lasts 1000 years.


If the language you want to work in is so important, you should have picked the company first and the location second.

Besides, data outlives code. By orders of magnitude. You seem to be in love with Haskell today, chances are you'll be using a different language in ten years but the data you'll be working with will probably have been around for much longer than that.

Don't fall in love with programming languages, it's a waste of emotional energy.


That's a very insightful comment. I 100% agree. And I'd add that I'm still completely baffled to see that some very obvious data support issues are still not resolved in the industrial world (e.g. Java):

* Computing with units (which is tough to put in a language, but so is compilation)

* Computing with dates in a sensible way

Of course one can do computations on those types, but it is so unnatural that it scares me:

* BigDecimal for currency, let me laugh

* Date/timestamp without proper casting rules

* Type towers with inheritance, generics, etc. Pfff...

* Still no fine library to represent an address

* Representing mutable ordered lists in SQL databases is still quite painful (possible, sure, but there's so much boilerplate code to write)

So the data representation/manipulation problem, which goes hand in hand with the data longevity you observe - that's something to learn about...

There's no reason to love current programming languages :-( (but I do love Python :-) 3 of course :-))


> Computing with units (which is tough to put in a language, but so is compilation)

F#'s units are nice.

> Computing with dates in a sensible way

The new Java time API is pretty good. But honestly my favourite datetime API would have to be Postgres'.

> BigDecimal for currency, let me laugh

Out of curiosity, why?


For F#, I didn't know. It looks verrrrrry sweet.

For BigDecimal, my grief is that it makes code horrible (at least in my experience, I'm still locked in JDK 1.7).

BigDecimal has the problem I see with the rest of my griefs: it is possible to compute things correctly with code (obviously), but the way the code is written is ugly (and painful).

   BigDecimal yearly_amount = new BigDecimal("1000.00");
   // divide() with a non-terminating quotient throws ArithmeticException
   // unless you give it a scale and a java.math.RoundingMode:
   BigDecimal monthly_amount = yearly_amount.divide(new BigDecimal("12"), 2, RoundingMode.HALF_UP);
Wouldn't it be nicer if:

   money<4> yearly_amount = 1000.00 EUR; // 4 decimals
   money<2> monthly_amount = yearly_amount / 12; // Beware the rounding issue!
Of course, this notation doesn't help much with rounding, but at least it makes the code easier to read. And trust me, using BigDecimal doesn't prevent many people from making mistakes with rounding.
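
For comparison, in Haskell-land you can sketch something close with a phantom type over a fixed-point number (Money and EUR are made-up names; Data.Fixed is in base):

    import Data.Fixed (Centi)

    -- The phantom parameter keeps currencies apart at compile time, and
    -- Centi gives two decimal places. Rounding policy is still on you.
    newtype Money currency = Money Centi
      deriving (Show, Eq, Ord)

    data EUR

    add :: Money c -> Money c -> Money c
    add (Money a) (Money b) = Money (a + b)

    yearlyAmount :: Money EUR
    yearlyAmount = Money 1000.00

    monthlyAmount :: Money EUR
    monthlyAmount = let Money y = yearlyAmount in Money (y / 12)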


True, but it can, if you set the precision etc.


I was in love with Haskell for quite some time. In fact, I still really enjoy the language. But after a while I noticed that I was not in love with Haskell itself but with some of the things it brings to the table, which are very important. Sure, there may be new languages in the mainstream in 50 years, but let's hope we learn from our past successes and failures. In that sense, it pays to care about languages today, as they will directly impact the future.

If we all give up and just use mainstream languages, then the future will be filled with copies of mainstream languages.


Come work for me. You'll have to work remotely because a Mountain View office is still in the planning. I will gladly compensate you for your efforts at a rate of 5 Cheetos an hour.


"when people are ready to engineer code that lasts 1000 years"

Actually, as silly as it sounds, I write most of my hobby code in C++ because of this. Well, not for a thousand-year span, but for longevity anyway.


C++ is far too volatile. Stick to Common Lisp instead; it is a living fossil.


"Living" being the a key word here.

I was astonished when I started learning Common Lisp last year: the language, despite its warts, feels modern and convenient, and the ecosystem - SLIME, asdf & quicklisp - is impressively well designed and surprisingly nice to use.


Yes, CL does have its share of historical baggage and warts. Despite those, I think it's still the best dynamically typed language. The rich number types, with arbitrary-precision integers by default, single- and double-floats, ratios, and complex numbers all built in; macros; and multimethods -- CL still has major features that the other dynamic languages, rather inexplicably to my mind, have yet to embrace. It's also much faster than Python or (especially) Ruby, and would be much faster than JavaScript but for the massive amount of effort that's gone into JIT compilation of this now-ubiquitous language.

If you're coming from a Haskell, F#, or even Scala or Ruby background, you might appreciate my functional collections library for CL [0]. It expands the range of code that can easily be written functionally.

Even if you don't have that experience, you might find FSet interesting, but it will take a little longer to wrap your head around.

[0] https://github.com/slburson/fset


> it's still the best dynamically typed language

One of the best, for sure. There are Racket and Clojure in the Lisp family, both comparable to Common Lisp in terms of features. Outside of that, there is Elixir, which in my mind gave Erlang enough additional features to also finally be at that level. Pharo is another language - a modern, Smalltalk-inspired, image-based GUI environment - which comes close in terms of expressivity, but is nowhere near CL implementations in terms of stability. There's also REBOL and recently also Red. They all differ, of course, but what they have in common is the amount of features such languages offer and the unique kind of synergy you get from each of them. It is still unmatched by currently mainstream dynamically typed languages.

> you might appreciate my functional collections library for CL

Thanks, looks interesting. I'll take a look later. I think one of the most irritating things when I started learning Common Lisp was the weird naming of common functions. In Emacs Lisp-land we got dash.el and s.el, which made the situation (mostly) better; a similar thing was done by Underscore.js. I think something like this could be helpful in CL - maybe your library could fill that role for me.


There is a reason the Catholic Church uses Latin for important documents. The fact that Latin is a dead language means that the meaning of those documents won't change over time.


Thanks, but I'm not very productive in Lisp. Lisp development environments need Emacs, and Emacs and my brain are not compatible. I've tried for 15 years to pick it up, time and again, but to no avail. Last time I even made a cheat sheet for myself (a link here as proof of effort):

https://www.dropbox.com/s/ceiijwn8s31j8hv/emac_memo.svg?dl=0

Given the platform availability I'm pretty sure C++ will eventually reach the living fossil status Lisp and Fortran have.


With all the activity in C++ development it'll hardly ever stabilise. Ever since C++11 it has kept changing at a frightening pace.

And you don't really need Emacs for Lisp, there are multiple other options (but yes, I'm using Emacs for C++ too).

Btw., quite a nice cheat sheet. What did you use to make it?


What's the best alternative to emacs for common lisp development?

"Btw., quite a nice cheat sheet. What did you use to make it?"

Thanks. Inkscape.


LispWorks is really nice. I've seen some people using Dandelion with Eclipse (although never tried it myself). It's also possible to interact with SWANK in vim, if you're into such things.

> Inkscape

Wow. Never even thought of it in such a context. I'll definitely give it a try.


Lucky for you, Mountain View has Haskell startups!


Question 1: Do you actually know Haskell?

Question 2: How the F#*$ are you a software developer in Mountain View without a job?


On question 2: "I arrived in Mountain View tonight."



