Hacker News
Haskell in Production: Channable (serokell.io)
148 points by aroccoli on June 28, 2022 | hide | past | favorite | 73 comments



There is little information in this interview about why they chose Haskell except the general notion that they liked it and maybe that it was faster than Python for their particular task. (I say "maybe", because rewrites generally tend to be faster, since you understand the domain much better.) Needless to say, there are many languages out there that are faster than Python. An obvious question should have been "why Haskell instead of C#/F#/Go/Rust, all of which can be made very performant and have better tooling". For that matter, why not Pharo? Or D? Or Factor? Again, the article hasn't really identified any specifics except "we like it" and "there is a university that teaches Haskell nearby".

I find such articles (and HN gets a lot of them) vaguely disturbing. They are PR pieces rather than engineering material written to influence rather than inform.


Check out the hyperlink in that paragraph to the original blog post for some more background. But I don't think it will be much more satisfying, as it was still a rather arbitrary choice, I will admit (https://www.channable.com/tech/how-we-secretly-introduced-ha...)

The answer mostly is: because I liked Haskell and knew it well; a colleague and I wanted to rewrite the job scheduling system and just picked it. But as soon as we started on the project we were extremely productive and were able to launch a proof of concept within days. That really got us excited and showed us a lot of potential in continuing with this approach. Especially things like QuickCheck and testing of pure functions allowed us to write code quickly and correctly with ease.

Also, most developers (and leadership too) on our team at the time had already taken one or more university courses involving Haskell, so we had quite a bit of 'hidden' expertise.

For the next project, the API gateway, I honestly felt it was an even better fit. Haskell's Web Application Interface and its ecosystem of middleware are top notch and make it really easy to write very performant web proxies and servers. Though if I had to make the same decision today, I'd probably pick Go over Haskell, given that Go has an even wider ecosystem of middleware and a much stronger story around cryptography, where Haskell is severely lacking.


Working on that API gateway with you was a lot of fun :)

To be fair, we did run into some issues with the GHC garbage collector performance initially. That took some time to figure out and wasn't the easiest thing ever. Like all tools, there are rough edges sometimes.

I still maintain that the Haskell we wrote at the time was pretty cheap in terms of operational load / bugs to fix (especially compared to the systems that they replaced). When I was back at the office for a reunion, I heard that things were still pretty nice in this respect, but maybe someone still at Channable can chime in with more recent stories! (Or complain to me about the code I wrote back then)


> why Haskell instead of C#/F#/Go/Rust

Keep in mind that this was 2016. Rust was a lot more niche back then, much of the tooling that we have today didn’t exist yet, and nobody was seriously using it in production; encoding_rs landed in Firefox in September 2017. Go only gained generics this year, and as for C#/F#, Channable runs and develops on Linux. .NET Core was only a few months old back then. The more serious contender was Scala, which is what the feed processing system was written in at the time.

A big reason for Haskell was that Arian is very skilled in it and we both liked it. But Haskell genuinely was a good choice for a domain-specific scheduler, because you can express the core logic really concisely and almost declaratively. Also, testing with QuickCheck is great (and Hspec, and testing pure functions in general). The application is written with a pure event loop at the core (it doesn’t do any IO inside the loop), which is something that is very natural in Haskell, but difficult to do in other languages unless you are extremely disciplined about it. That in turn makes it possible to very easily and quickly test all kinds of rare corner cases by just writing down a state value, a list of events, and an expected final state. We also had QuickCheck synthesize events and test our invariants.

Also, what is really nice to do in a GC’d language with persistent data structures, when you need an HTTP API, is to have one event loop that publishes a new state to an MVar after every iteration, and make the HTTP handler sample the current value and work with that. That way you avoid locks, so reads from the API never block, and readers don’t block state updates. STM in Haskell makes this a breeze. It is possible to apply this approach in Rust (and I often do, because it’s much easier to reason about than locks), but you end up cloning most things, whereas with persistent data structures you get sharing.
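The pure-event-loop-plus-MVar pattern described above can be sketched roughly like this. This is a hypothetical miniature with invented names, not Channable's actual code; the point is that the core logic (`step`) is pure and trivially testable, and the loop only touches IO at the edges:

```haskell
import Control.Concurrent.MVar

-- Illustrative event and state types (invented for this sketch).
data Event = JobQueued Int | JobFinished Int
  deriving (Show)

newtype SchedulerState = SchedulerState { pending :: [Int] }
  deriving (Show, Eq)

-- The core logic is a pure function: corner cases are tested by just
-- writing down a state, a list of events, and the expected final state.
step :: SchedulerState -> Event -> SchedulerState
step s (JobQueued j)   = s { pending = j : pending s }
step s (JobFinished j) = s { pending = filter (/= j) (pending s) }

-- The loop receives an event, applies the pure step, and publishes the
-- new state for HTTP handlers to sample. With a single writer, the
-- tryTakeMVar/putMVar pair just replaces the previous snapshot.
loop :: IO Event -> MVar SchedulerState -> SchedulerState -> IO ()
loop recv published s = do
  ev <- recv
  let s' = step s ev
  _ <- tryTakeMVar published
  putMVar published s'
  loop recv published s'

-- An HTTP handler samples the latest published state without blocking
-- the event loop (and without the loop blocking the handler).
currentState :: MVar SchedulerState -> IO SchedulerState
currentState = readMVar
```

QuickCheck fits naturally here: generate arbitrary `[Event]` lists and check invariants of `foldl step initialState events`.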


Sounds like Clojure would also have been a good fit (STM, persistent data structures, functional)


Choice of language is never purely technical, because almost all popular ones are good enough for most use cases. So the choice comes down to the preference and experience of the engineers involved.


> why Haskell instead of C#/F#/Go/Rust

Thank you, I'll keep that question in mind for future interviews to keep it more specific for some of the people here. :)

As to the rest -- our interviews are created as much to share information between Haskellers as to inform other people about Haskell.


Channable has a tech blog where you can find more details.

The most popular ones discussing technology choices (with HN discussion):

- Haskell (https://news.ycombinator.com/item?id=13782333)

- Nix (https://news.ycombinator.com/item?id=26748696)

You can find more posts on:

- https://www.channable.com/tech

- And the HN discussion with this Algolia query: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

Hope those are useful to you! :-)

Disclaimer: I led DevOps at Channable for a while, but I no longer work there.


All tech choices are in reality based on the whims of the developers who make the decision. I see developers make so-called “rational” decisions all the time, but other developers in the same situation would make very different “rational” decisions. The reason is that different developers value things differently, and no amount of “rational” decision making can change that. It’s exactly the same reason we have politics: people value different things, and no “rational” algorithm can fix that.


Why wouldn't "we like it" be good enough?


> we also encountered some quadratic runtime algorithms in the memory allocator of the RTS – and even compact regions didn’t help then. We documented that particular bug and have since provided a fix for it as well.

Just like you never want to be the smartest person in a room, you never want to be the one pushing a language to its limits.

IMHO when you’re providing language patches you probably choose the wrong language.


> Just like you never want to be the smartest person in a room, you never want to be the one pushing a language to its limits.

(Channable co-founder here) If everybody took this view, then a language could never improve. I have actually been very impressed with how far we have been able to take Haskell before having to push its limits. Anecdotally, we previously were using Scala (and Spark) and ran into issues much earlier. In those cases we were pushing the limits of Spark, but underlying those were the limits of the JVM.

> IMHO when you’re providing language patches you probably choose the wrong language.

This is a very myopic view. There is much more to consider when choosing a language (what kind of team do you have, what kind of problems are you working on, maintainability, performance, ecosystem, hiring, etc.).


I really appreciate your view on all this, and I totally agree with you that the other commenter has a very nearsighted perception of things.

I think the go-to example of a group taking this to the extreme is Jane Street with OCaml. For all intents and purposes, Jane Street are really the major shepherds of the language these days. They contribute tons of compiler fixes and improvements, and have written their own replacement for the standard library (which is used by many people). I think it'd be silly to say that they "chose the wrong language"; rather, they've invested in a language ecosystem that benefits them and reflects their needs, while at the same time growing and improving a whole language community. That's an awesome feat.


The difference here is scale: Jane Street is a 20 year old company with over 1000 employees, Channable has maybe a fifth of that. The proportion of engineering effort that will have to go into language expertise will just be higher at a smaller shop pushing the bounds and this will simply take away from feature development.

I've worked writing Haskell in multiple shops, seen it be successful, and also seen it fail too many times. This is a huge issue for adoption: an engineering team using Haskell will have to devote a significant amount of time/effort to tooling/reusability/infra that would otherwise be available to directly deliver value to customers had they gone with a different language. Stuff like pagination, custom JSON/TypeScript encodings and route generation, organizing the code base so complication time isn't a nightmare: these are all "inner source" projects I've worked on that needed to be done for us to use Haskell with a decent-sized team. The fact that Channable had to go down the road of ghc-compact regions suggests, to me, that on a strictly technical basis they would be much more productive using a language like Rust or C++. Better for their end users, and better for their investors.

Of course, large companies can make these deep investments into languages, tooling, and infrastructure (and by all accounts Channable is well on their way), but when they do, it's only a small proportion of their entire dev team. For Haskell, I contend that its lack of state-of-the-art library support and tooling is a massive impediment to wide-scale adoption: companies just don't want to put 1/3 of a 30-person dev group on a "platform" team in order to be productive.


Jane Street's been doing this for most of their existence. I interned there a decade ago, when they had 300 people and maybe 100 engineers, and they had already built up an impressive internal infrastructure—including lots of in-house tools (e.g. a code review system) that were custom because they got value from building them in-house, not because of OCaml. At that point, they had been on this trajectory for a decade already, certainly well before they had a massive team of developers. I forget the exact details, but they started using OCaml in the early 2000s when the entire company had a single-digit or maybe low-double-digit employee count.

Building internal tools, libraries and expertise was clearly a massive net gain in productivity for them—it did not "take away from feature development" in anything but the most superficial and short-sighted accounting. Most recently I worked at Target, and the contrast was pretty extreme: on a lot of fronts, Jane Street had better in-house systems than Target, with its thousands of tech employees pushed to use mainstream technologies and existing open source systems. It's obviously not a 1:1 comparison, but seeing how productive a relatively small organization like Jane Street could be despite (or, really, thanks to) building a ton of things in house really got me to question the conventional wisdom here.

Of course, everything you've said is a real obstacle when that's how managers or other teams think, regardless of whether it's accurate! Dealing with perception and legibility is definitely one of the difficulties in trying to have a Haskell team in a big company. "Nobody ever got fired for buying IBM" isn't so much a cute aphorism as corporate gospel—even if they probably should have been.


Any team is eventually going to have to invest in tooling. If they had gone with C++ they would be fighting different fires: memory safety problems, build tool complexity, bloated compile times, memory leaks, etc. If they went with Javascript; run-time errors, async memory pressure, complex deployment artifacts, etc.

Haskell has benefits for its trade-offs. Its type system is world class. I get a lot of leverage out of its type class constraint resolution. It has a sufficient compiler that generates good, fast code and a run-time system that makes dealing with concurrency much safer and easier than other systems I've worked with.

You can choose any language we have out there right now and a decently sized engineering team, if they're smart, are going to have an internal team or some amount of their backlog of work dedicated to tooling. Even Twitter has a tooling team! And Java has an ecosystem where millions of dollars of full-time engineers are building IDEs and DUX tools constantly.

It's the nature of the beast.


Yes, but I'd argue the "buy vs. build" balance is currently shifted too far toward the build side when you build all (or most) of your backend services with Haskell. There's a good deal of solutionism in the Haskell ecosystem, but that can be solved with diligent decision making and good technical leadership. The greater issue, IMO, is that Haskell lacks the breadth and depth of libraries that would make it a low-risk language choice: one that lets you build a project, see how things go, then restart with a different team/assumptions if need be.

I want Haskell to win and solve these problems: I've invested years of my life learning the ecosystem and actively maintain open source libraries. But after years of using the language and running into problems, I've lived through the situations where Haskell is less than stellar as a software engineering tool, and it's quite disappointing to see.


I think the only solution here is to dump a war chest of cash into the ecosystem.

One of the reasons Java and C++ are incumbents is the network effects that made them what they are today. Sun (and later Oracle) literally spent hundreds of millions of dollars convincing developers Java was the next big thing. C++ enjoyed platform support by being compatible with C and was embraced by big players with deep pockets.

Haskell doesn’t have a shot at becoming “a safe choice” in that regard. It's not the exclusive language of a desirable platform, it doesn't have an organization with deep pockets funding its development, and it doesn't have a killer application.

But it is an oracle for the future of where programming is headed. Many languages are trying their best to steal ideas from Haskell/FP: immutable data structures, pattern matching, lambda functions, lack of null, constrained parametric polymorphism, etc; patterns from libraries like monads, streams, lenses, functors; all making their way into Java, C#, etc.

That being said, I haven’t felt that the ecosystem is lacking core libraries for almost anything I’ve been working on. The IDE support is still lacking, and profiling tools are there but lack a shiny DUX. Otherwise I can’t really complain.


> I've worked writing Haskell in multiple shops, seen it be successful, and also seen it fail too many times.

Can you share some of the successes, the failures, and the composition of the teams in terms of Haskell experience?

Also any success/failure causes you believe they had in common.


Would you even know what Jane Street was if they didn’t stand out in this way?

Seems like a good way to get smart people interested in working with them.


Calling compilation time "complication time" is such a good typo


That's how I see it as well - Haskell has been very beneficial for us as a company, so it's nice if we can also contribute something back to the community.


Can you talk more about what kind of limits you hit with Scala and Spark? I definitely have my own biases, but my feeling is you get 95% of the good stuff of Haskell while avoiding most of the problems mentioned elsewhere in this thread (e.g. you get a proper IDE-integrated debugger with breakpoints/stepping/etc. that Just Works).


You could be correct, or you might have an exaggerated preconceived notion of how hard it is to provide language-level patches, because of how the industry was in the 90s and early 2000s.

By chance, I've worked at a company where a colleague had patched the language we were using to save some memory. Back then I (and many people in the Ruby community) considered him a rockstar for doing so. That praise definitely was deserved, since he's done all sorts of cool stuff in his career, but in retrospect, while bold, this feat wasn't all that inconceivable. The Ruby interpreter isn't some magical black box; it's got all the features we learned about in university neatly organized in C files. Somewhere in there is a garbage collector; it manages memory. You can spend a week or so and understand its basic workings. Making it behave slightly differently for a specific use case isn't such a moon shot. I believe any smart and experienced engineer could do a proof of concept in a week or two.

I'd like to flip it around. If you're coding on a platform that is so complicated that you couldn't patch it if you needed to, then you're on the wrong platform. That platform is a liability to your company, a liability that you might have to pay off by getting a support contract.

That said, I'm pretty sure the Haskell runtime is very complex, so it might still be the wrong language. I'm just saying, I remember the early 2000's, and if you ran into a problem with your .Net or Java runtime, or god forbid Windows itself, you better be working at a big fat company with a nice support contract, or you'd basically be screwed.


> That said, I'm pretty sure the Haskell runtime is very complex, so it might still be the wrong language.

While the Haskell RTS as a whole is a big (and marvelous) piece of engineering, it is not actually necessary to understand all of it to contribute some smaller patches.

The RTS is written in fairly readable and clean C code and it is possible to make local changes to e.g. the memory allocator without having to touch code in lots of different places.


I don't think I've ever seen a large project that wasn't pushing the limits of its language in one way or another—just with popular languages, the language is treated as a given and people jump right to finding workarounds or tweaking their own designs rather than even considering contributing to GCC or Python or whatever they're using. (Not quite general-purpose languages, but don't get me started on the pain I've had with Hadoop and Hive... but trying to change those was, somehow, never on anybody's mind.)


That is the most business-efficient course, but it is also an excellent way to ensure you have no true expertise in your company when a gnarly problem eventually strikes. Avoiding the hardest problems does not attract the kind of people who like solving hard problems, after all.


> IMHO when you’re providing language patches you probably choose the wrong language.

In your world, who does provide language patches then?


It takes guts to be amazing

It's fine not to have guts though

But it is a lot more fun to have them :)


On a more concrete note:

Your conclusion that needing to hack on the compiler is a sign you made the wrong decision is a bit iffy

I've had ghc cloned and building on my computers for years. It's easy.

Seems like a cultural difference driving your opinion. Nothing more than that. Maybe that means you dislike the culture?


I don't know why but the first thing that came into my mind was this video. Btw I love Haskell. https://www.youtube.com/watch?v=ADqLBc1vFwI


Glad to see the cave-man like debugging is brought up.

Haskell stops many classes of bugs, but lord help you if you've made a complex logical mistake somewhere along the line and are trying to figure out where you went wrong. It's high school printf debugging all over again except you can't even reason about order of execution due to lazy evaluation.

I'm reasonably proficient in a number of languages, including Python, JS, C, C++, Prolog, Java, I even have a decent amount of Forth under my belt. Nowhere else have I experienced the complete dearth of tooling that is the Haskell ecosystem. The LSP is a step in the right direction but merely a first step.


It’s especially bad given how much more tooling is in principle doable in Haskell compared to other languages. In Python, you can’t really reason about shit because of all the black magic you can incant, but the tooling does a really solid job of getting really close. In Haskell, you can absolutely reason about nearly every tool you could dream up, but none of them have been built into something really usable.


Actually, I think debugging is fundamentally much harder in a lazy language. A line-by-line debugger would be nice to see, but it would jump around all the time as it evaluates things, which I don't think would make for a very nice debugging experience.


In pycharm, when using generators and yield and that kind of thing, you get exactly the experience you’re describing. It actually makes for a totally fine debugging experience.


Note that the repl (ghci) has had a time-travelling debugger since ~forever https://donsbot.com/2007/11/14/no-more-exceptions-debugging-... What's missing is the same for your compiled binaries, with UI for inspection etc.

Maybe https://hackage.haskell.org/package/haskell-debug-adapter will some day be non-experimental.


> It's high school printf debugging all over again

If your C++ application is even semi complex or uses Qt you are basically in printf land again. I mean sure go ahead use the debugger for stacktraces and such but anything beyond that is finicky.


In 10 years of C++ I have never dropped back to poring over logs once I have an idea of where the problem is. Drop a code breakpoint, inspect the stack, find the variable that ain't what it's supposed to be. The next step is to watch every change to that variable over its lifetime and figure out what went wrong.

The dumb way to do this is to drop printfs in the code on every state change of that variable. The tooling way is a data breakpoint. If you have a decent test suite for the codebase that can replicate the fault, the data breakpoint is IMHO always faster.

If you can't replicate the fault, you're going to need logging. So logging isn't obsolete in all cases, but with Haskell (at least in my experience) the excessive logging approach is your only option really.


With time travel debugging, data breakpoints become even more useful, because you can backtrack to the root cause once you've recorded the fault (rather than needing to run forwards and hope to replicate a corruption at the same location as last time).

The GHCi Debugger (https://downloads.haskell.org/~ghc/7.8.3/docs/html/users_gui...) mentioned in another comment (https://news.ycombinator.com/item?id=31911462) can time travel but I don't see anything like a data breakpoint.

I'm not quite sure what a meaningful functional equivalent would be, given that, semantically, no state is actually mutable in Haskell, but "where did this value come from?" still ought to be useful.


> If your C++ application is even semi complex or uses Qt you are basically in printf land again. I mean sure go ahead use the debugger for stacktraces and such but anything beyond that is finicky.

UDB, the time travel debugger I work on, supports C++ via GDB and there seem to be certain things that make life really awkward.

One member of my team has been looking into the combination of GCC's debug info and GDB's behaviour around inline frames. There are a number of quirky things we see there, which boil down to "inline functions make the debug experience go wild".

My understanding is that GCC isn't providing as helpful debug info as we'd like but GDB also isn't handling it in the best way. There are corner cases that can be very confusing from a user's PoV.

Even in C, this is quite visible. In C++, where inlining matters even more, we'd expect it to be exaggerated further.

I'd be really interested to know what things you find finicky (and are you even Linux / GCC / GDB, or is this a wider problem?)


I think you are exaggerating for some reason.

You said you have a decent amount of Forth under your belt, yet for some reason you don't even compare Haskell to Forth.

The very fact that you can have a typed Forth in Haskell shows that its type system is a very good tool, much better than what you get almost anywhere else.

I managed to learn and achieve more from using Haskell than from almost any other language, per effort spent. This is mainly due to the tooling and libraries, which I find top notch.

And, of course, I am also reasonably proficient in various languages, including Python, JS, C, C++, C#, OCaml, Java, Prolog, a dozen assemblers, and a couple of hardware description/design languages.


I have had some success stuffing a ReaderT [String] in my monad stack to create traces with a custom flow using:

    local ("a tag for the code block":) $ BLOCK.
The idea is that when a mistake is detected you read the trace and throw that along with whatever error information is available. It is a lot better than printing stuff, because it is independent of evaluation order, and follows the structure of your monadic code.

It is quite easy to set up and use. It is really just two functions and an extra level in the monad stack.
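For concreteness, the two functions mentioned might look something like this. This is a minimal sketch with invented names, using a plain `Reader` (from mtl) rather than a full application stack, so the behaviour is easy to test:

```haskell
import Control.Monad.Reader  -- from the mtl package

-- The "extra level in the monad stack": an environment carrying a
-- stack of tags describing which code blocks we are inside.
type Traced = Reader [String]

-- Push a tag for the duration of a block. Because the tags follow the
-- structure of the monadic code, the trace is independent of lazy
-- evaluation order.
withTag :: String -> Traced a -> Traced a
withTag tag = local (tag :)

-- When a mistake is detected, read the trace and attach it to the error
-- along with whatever information is available.
failWithTrace :: String -> Traced (Either String a)
failWithTrace msg = do
  tags <- ask
  pure (Left (msg ++ " at " ++ show (reverse tags)))
```

For example, `runReader (withTag "outer" (withTag "inner" (failWithTrace "oops"))) []` produces an error message mentioning both tags, outermost first.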


Well, Leksah used to be a good experience in regards to debugging.

https://github.com/leksah/leksah


… and rampantly introducing the IO monad only to remove it once the culprit is found.


Why not use Debug.Trace?


Absolutely second this.

Debug.Trace can use the event log of the RTS, which is so cheap it is hard to believe.


On rare occasion you actually need to introduce additional sequencing to work out what's going on. In that case, you might well want to keep it after the fact, though, since your fix might depend on that sequencing.

But yes, most of the time that you need something like printf debugging, Debug.Trace works great.
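For anyone unfamiliar with it, the basic form looks like this. `trace` (from base's Debug.Trace) writes its message to stderr only when the wrapped value is actually demanded, which is exactly the lazy-evaluation caveat being discussed; the example function is mine:

```haskell
import Debug.Trace (trace)

-- The trace message fires when (and only when) the result of `mean`
-- is forced, not when `mean` is "called".
mean :: [Double] -> Double
mean xs = trace ("mean over " ++ show n ++ " samples") (sum xs / fromIntegral n)
  where n = length xs
```

If the result of `mean` is never forced, nothing is printed at all, which is worth keeping in mind when a trace seems to be "missing".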


If you need sequencing you can use `seq` or even `deepseq`


I think you can enforce eager evaluation in Haskell.


But then you can’t take advantage of the library ecosystem that assumes lazy eval.


You can enforce it per module, per function, per datatype or even just at a single site. Enforcing it wholesale is using a nuclear bomb when you need a scalpel.
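A quick illustration of that granularity (a sketch, not an exhaustive list of the strictness knobs):

```haskell
{-# LANGUAGE BangPatterns #-}
-- Per module: put {-# LANGUAGE Strict #-} at the top of a file to make
-- its bindings strict by default.

-- Per datatype: strict fields with `!` are forced on construction.
data Point = Point !Double !Double

-- Per binding: a bang pattern forces the accumulator on every step,
-- avoiding a chain of thunks in a left fold.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs

-- Per site: `seq` (or `$!`) forces a single value before continuing.
forceThen :: a -> b -> b
forceThen x y = x `seq` y
```

This is why wholesale strictness is rarely needed: you can usually pin down the one accumulator or field that was building up thunks.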


If you liked this interview, we have plenty more interviews with practical Haskell users on our blog!

https://serokell.io/blog/haskell-in-production


My question is: how much of this work could have been shoved into an SQL database (which would give you the type checking and referential integrity needed), with a relatively thin layer of non-SQL programming on top to handle the ETL tasks mentioned?


Every seven years or so now, a new generation gets suckered into believing Haskell is the Right Way


I think almost everyone who writes Haskell has in the past written in other languages, and clearly those who stick with it prefer it.

I'm personally far more productive in Haskell than in any other language I've tried. I don't know what's the Right Way, but it's the Best Way I've found so far.


It is not. It is just sometimes better than the alternatives.


I program Haskell for all projects because it's fun - not because there's some obvious engineering value-proposition.

I'd rather not waste my consciousness-hours on boring languages.


What is it that makes it so bad?


It’s an academic language, designed as a substrate to grow papers in.

If JS infamously has dozens of build systems, Haskell has almost nobody working on tooling. Instead you get many smart people writing libraries for building super abstract spaghetti code - which is great for research and advancing the state of the art (sometimes the wacko abstractions turn out to be good ideas that trickle down into regular programming languages), but would someone please write some tooling as well.

I love Haskell, it’s fun, mind-expanding and absolutely worth learning. Maybe one day it will be ready for prime time, but until then I’d suggest an ML for “real” software.


An ML, like SML or OCaml? Neither of these has any better tooling. Really the only viable candidate would be F#, no?


Not sure I would say that all aspects of tooling are bad though. Stackage for example is great and I wish there was anything even close to it in other languages.


Being productive with it requires rewiring your brain, which depending on what kind of business you run might not be worth it more often than not (same applies to the crab language).


Like it or not, most problems you will want to solve are not "purely functional" but rely on sound management of shared mutable state.

Haskell advocates will say shared mutable state is done better in Haskell than other languages. They will trot out blog posts claiming that Haskell is also a better imperative language than other imperative languages. It isn't.

Just look at all of the theoretical mumbo-jumbo associated with something like Lenses...now look at the utterly trivial problem they solve.

Even for a FP tool, Haskell is riddled with cruft. Look at the set of compiler pragmas that are pretty much required to be in any useful Haskell. Including basic crap like overloading the String type. These are hacks, pure and simple.

You could go on and on and on. Most of the original Haskell advocates from the initial explosion of advocacy around ten years ago have moved on to Rust and other language communities. They gave up, and they were the gurus.

Anyway, like most programming hype, there is no stopping it...you just need to get on the Haskell hype wagon for a while, realize it is a waste of time, then get off and try to inform others.


> Like it or not, most problems you will want to solve are not "purely functional" but rely on sound management of shared mutable state.

This is a claim which is certainly not true in all domains. For example, web applications and compilers, two of Haskell's "killer problem domains", do not heavily rely on shared mutable state. That said, Haskell's approach to shared mutable state is no less sensible than the rest of the language.

> Just look at all of the theoretical mumbo-jumbo associated with something like Lenses...now look at the utterly trivial problem they solve.

With all due respect if you believe this then I'm not sure you fully appreciate the problem that they solve. The terminology is indeed obtuse though.

> Even for a FP tool, Haskell is riddled with cruft. Look at the set of compiler pragmas that are pretty much required to be in any useful Haskell. Including basic crap like overloading the String type. These are hacks, pure and simple.

That's not fair. Haskell is a language that is (de facto) defined as a basis, the Report, plus a set of modernising features which have been implemented since. They're not hacks, they are deliberate optional language features. It's a different approach to most modern languages but no worse. A combined set of such features, called GHC2021, is the new "standard" and enables a huge swath of them by default.
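To make the String example concrete: OverloadedStrings is one of those deliberate, opt-in features (it is not part of GHC2021, so you enable it where you want it). It lets a string literal stand for any `IsString` type, such as `Text` from the text package, instead of only the linked-list `String`:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)
import qualified Data.Text as T

-- The literal below is desugared to `fromString "hello, world"`, so it
-- can be Text (or ByteString, etc.) rather than [Char].
greeting :: Text
greeting = "hello, world"
```

Without the pragma, the same literal would only typecheck as `String`; with it, the surrounding type annotation picks the representation.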

If anything I'd say there is very little hype around Haskell. People look at it and think it's cool, but that's not because of people hyping it up. In fact every post/article I've read about Haskell has been extremely measured and open about its shortcomings.

Sorry if you just wanted to get something off your chest, but it makes me sad to see a perfectly fine language disparaged unfairly.


I dunno. It's not a perfect language but it's sufficient. The bar is pretty low.

Type classes were first proposed around 1989 and Haskell adopted them early on. They made parametric polymorphism tractable. C++ finally adopted Concepts in C++20.

There's literally cruft in any useful language that has been around long enough. Keep searching for that diamond though. It's out there somewhere.


> Like it or not, most problems you will want to solve are not "purely functional" but rely on sound management of shared mutable state.

In theory this is correct, but incorrect in practice.

- Sincerely a real world Haskell programmer


Has the crypto crowd moved on from Haskell already?


Haskell is fun to learn and play with, but it's almost never the right choice for a real business. Using a language you can hire for, that your employees can google solutions for because it's already a popular production language, and has the most popular packages and frameworks for your use case, is far more important than terseness and safety in 99% of cases.


I helped with recruiting and interviewing for a Haskell team in the past and, in our case, it was substantially easier to hire for our roles compared to non-Haskell teams in the same company and location (Target in Sunnyvale/remote). I figure there were two reasons for it:

1. A lot of people actively want to use Haskell, more than there are Haskell roles. Nobody is going to jump onto a generic Java role for the sake of using Java!

2. Language choice is a way to show rather than tell that you are willing to do something different and interesting. This ended up mattering even to non-Haskell candidates—I remember one very qualified OR expert told me he chose us over Google in part because the Haskell/PL angle to our approach sounded a lot more interesting.

When I later moved to a more traditional data science team at Target, the difference in recruiting was pretty striking. I used to field qualified inbound candidates all the time, while the new team rarely got any; the recruiting pipeline was noticeably spottier; when we did get qualified candidates, they were more likely to reject our offer. Programming language choice wasn't the only difference between the teams, of course, but it was probably the most externally visible difference to candidates.

With this experience, if I ever go start a new team or company, I'm going to use Haskell as a secret weapon to make hiring much easier. That goes against tech industry "common wisdom" but, as ever, common wisdom is far more common than wise.


I appreciate your point, but I also remember hearing that exact argument being used against Python in 2001. A Perl developer was explaining how Python was a nice language, but would never be able to make in-roads into business. It was too hard to find a Python developer and most programmers don't want to learn a new language. Furthermore, CPAN provided such a wealth of packages that Python developers would have to implement for themselves. He did concede that Python code did tend to have fewer bugs, but the previously mentioned advantages, alongside the fact that it was much easier to write Perl than Python, meant that Python would never make business sense and Perl would settle into being the lingua franca of the development community. The guy's site even required a Netscape plugin that allowed him to script pages in Perl instead of JavaScript, which he stated was merely a stopgap until Netscape officially added a Perl interpreter into the browser.

This was about five years after industry representatives argued that my school should stop teaching C or C++ and focus on the more modern Visual Basic. They explained that, by the year 2005, there would be more COBOL jobs than C jobs. Meanwhile, a semester of Visual Basic experience would guarantee the students a lifetime of employment without learning a new language.

I'm under no delusions that Haskell will be the next Python or that Python itself will go the way of VB, but it always makes me take these "industry" arguments with a grain of salt.


>Using a language you can hire for,

A language like Haskell is much easier to hire for because you can get higher quality employees at a lower price, due to the high demand for Haskell programming jobs and low supply of such jobs. If you go for a language like Java, a double digit percentage of the candidates are the kind that can't even solve fizzbuzz without help, so you have to waste much more time weeding out mediocre candidates.


Haskell is easy to hire for.

I worked for a Haskell consultancy for 7 years and did hiring interviews. We filled open roles extremely quickly; the biggest problem was to decide which of the multiple high-quality applicants to pick.

I am now running my own startup and we just put out an internship job description [1].

Within 1 day we got over 20 excellent applications, and we'll probably have space for only 1 for now.

Maybe if you want to hire 100 Haskellers on the spot, you'll find hiring difficult. But most starting businesses don't have that problem.

[1]: https://old.reddit.com/r/haskell/comments/vklj9d/benaco_offe...


I find it somewhat amusing that this view is now gaining traction on the same site that was originally deeply influenced by "The Blub Paradox":

http://www.paulgraham.com/avg.html



