
Continuations are useful monads too (for implementing coroutines or control features like Python's "with", etc).

As are parsers, uniqueness, randomness, non-determinism, readers, writers, regions, resource management, local-state threads (ST monad), probability distributions, software transactional memory, ....

Another nice thing is composing various monads to build a custom one for the purpose you need.

For example:

  StateT S1 (EitherT Err (State S2)) a
This will be a stateful computation that always yields a new S2 value, even when an exception occurs (preserving any modifications made up to the exception), but only yields an S1 in the case of success.
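To make that concrete, here is a minimal sketch of how such a stack unrolls (runStack is a made-up name; EitherT here is the one from the "either" package -- with modern transformers you would reach for ExceptT instead):

  import Control.Monad.Trans.Either (EitherT, runEitherT)
  import Control.Monad.Trans.State  (State, StateT, runState, runStateT)

  runStack :: StateT s1 (EitherT err (State s2)) a
           -> s1 -> s2 -> (Either err (a, s1), s2)
  runStack act s1 s2 = runState (runEitherT (runStateT act s1)) s2
  -- the s2 state always comes back; (a, s1) only when no err was thrown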

This is just one of infinite possible compositions of monad transformers.

If you hard-code one ambient monad into your language, you lose the ability to use monads as a DSL tailored to the use case you have at hand.

For example, in the application I'm developing now, I use a "Transaction" monad that guarantees that my key/value store transactions cannot do any IO or anything other than read keys and write keys. As a bonus, that means I don't even need transactionality from the underlying data store -- I can "revert" or "commit" by implementing the Transaction monad as a state tracker of all the changes on top of a hidden IO layer that exclusively allows reading keys.
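For illustration, a minimal sketch of what such a Transaction monad could look like -- all names here (Store, readKey, writeKey, runTransaction) are made up for the example, not the actual application code:

  {-# LANGUAGE GeneralizedNewtypeDeriving #-}
  import qualified Data.Map.Strict as Map
  import           Control.Monad.Trans.Class  (lift)
  import           Control.Monad.Trans.Reader (ReaderT, runReaderT, ask)
  import           Control.Monad.Trans.State  (StateT, runStateT, gets, modify)

  type Key = String
  type Value = String

  -- the only thing a transaction can do with the underlying store is read
  newtype Store = Store { storeRead :: Key -> IO (Maybe Value) }

  -- the constructor is kept private, so user code can only use readKey and
  -- writeKey -- no arbitrary IO can sneak into a transaction
  newtype Transaction a =
    Transaction (ReaderT Store (StateT (Map.Map Key Value) IO) a)
    deriving (Functor, Applicative, Monad)

  readKey :: Key -> Transaction (Maybe Value)
  readKey k = Transaction $ do
    pending <- lift (gets (Map.lookup k))
    case pending of
      Just v  -> pure (Just v)                  -- see our own pending write
      Nothing -> do
        store <- ask
        lift (lift (storeRead store k))         -- fall back to the store

  writeKey :: Key -> Value -> Transaction ()
  writeKey k v = Transaction (lift (modify (Map.insert k v)))

  -- "commit" hands the accumulated writes to the caller to apply;
  -- "revert" is simply dropping them
  runTransaction :: Store -> Transaction a -> IO (a, Map.Map Key Value)
  runTransaction store (Transaction act) =
    runStateT (runReaderT act store) Map.empty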

This also means I can implement useful transactional primitives such as "forkScratch" which allows me to fork the transactional state, run an action in that forked state, discard its transactional side effects, and keep its result. I use this to "try out" a transaction, see if it would work (in my context, type-check the new state and discard it) without actually having any effect.
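Continuing that hypothetical sketch (get and put come from Control.Monad.Trans.State), forkScratch falls out almost for free: snapshot the pending writes, run the action, restore the snapshot, keep the result.

  forkScratch :: Transaction a -> Transaction a
  forkScratch (Transaction act) = Transaction $ do
    saved  <- lift get          -- snapshot the pending writes
    result <- act               -- run the action against the fork
    lift (put saved)            -- discard the fork's writes
    pure result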

tl;dr: There isn't a finite set of useful monads you can bake into the language. There are infinite compositions of useful monads.



The point is to not (re)invent little languages all the time, but to use the actual language. For coroutines, see https://sites.google.com/site/gopatterns/concurrency/corouti....

The parser combinator is also interesting. A parser combinator is merely a function that takes an input pointer and produces a list of matches. One can implement this in Go (or any programming language I know of) without agonizing over the syntax used to chain combinators.
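In Haskell notation, the type being described is roughly the classic "list of successes" formulation (a sketch, not any particular library's definition):

  -- a parser consumes input and yields every possible (result, rest) pair
  newtype Parser a = Parser { runParser :: String -> [(a, String)] }

  item :: Parser Char                       -- consume one character
  item = Parser $ \s -> case s of
    []     -> []
    (c:cs) -> [(c, cs)]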

Essentially, the type machinery necessary to implement and use monads is a tradeoff. One can implement the code in pretty much any language one wants. The only point in contention is how many invariants are enforced by the type system vs. how much type system wrestling one has to deal with. I lack the empirical evidence that, for example, a "monad that guarantees that my key/value store transactions cannot do any IO" is any better than a comment "// Transactions never do IO. To perform IO use package TIOBridge.". But I have a nagging feeling that adding logging or exceptions in this context is quite a bit more convoluted than "import log; log.info(...)" or "panic(...)".


Essentially, you're saying that the point is not to use DSL's. The benefits of DSL's are explained everywhere, so I don't think I need to repeat them here...

Parser combinators are not what you said they are -- and you won't be able to write the kinds of things you can in Haskell. E.g., to parse 10 comma-followed-by-int pairs:

  replicateM 10 (parseComma >> parseInt)
How would you implement that in Go? Even if you violate DRY and re-implement replicateM in every monadic context I don't think Go will be able to encode anything similar in power to parser combinators.

> But I have a nagging feeling that adding logging or exceptions in this context is quite a bit more convoluted

To add exceptions, I use an EitherT transformation on my Transaction monad.

To do a debug log, you just insert an impure debug log as you would in a non-pure language.

To emit an actual production log, you would use a Writer that accumulates the log lines so they can eventually be written outside the transaction context; otherwise, aborted transactions would also emit production logs. If you do want those semantics, add logging capabilities to the transaction monad itself. Otherwise, you keep the nice guarantees about what can't happen.
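As a hypothetical sketch of that last option, a log-accumulating layer can be stacked on top of the Transaction type sketched earlier (LogLine and logMsg are made-up names):

  import Control.Monad.Trans.Writer (WriterT, runWriterT, tell)

  type LogLine = String

  -- a logging layer on top of the Transaction sketched earlier
  type LoggedTransaction a = WriterT [LogLine] Transaction a

  logMsg :: LogLine -> LoggedTransaction ()
  logMsg l = tell [l]

  -- runWriterT peels the layer off, handing the accumulated log lines back
  -- to the caller, which writes them to IO only after the transaction commits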


But why the hell get into all that? We've got a job to do.

Your comment here is a great example of why people don't bother giving Haskell the time of day. I've already got business problems and performance problems, why give myself type system problems too? You're talking about adding on all these layers of complexity and abstraction, and the benefit is more "pureness". What do I care about pureness? I'm writing business code, or unix code, it's not going to be pure either way.

You'll claim that the type system makes all of the business problems just go away magically because your type system has reached a skynet level of self-awareness, but we both know you're gonna be debugging the same crap at the end of the day, except now you have 12 different monads, type constraints and a homegrown DSL in between you and the problem.

I'd prefer to work with a simpler environment, and it doesn't make me "too dumb to understand haskell". It just makes me "more productive than if I were working in haskell".


> why give myself type system problems too?

Exactly! Why use Go and have type system problems? Chase nil bugs in the middle of the night, when my type system could have caught them all for virtually no cost at all?

Why use Go and have type system problems like lack of sum types and pattern matching, having to waste my time emulating them with enumerated tags or clunky type switches?
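For readers who haven't seen them, the sum types and pattern matching in question look roughly like this (a toy example):

  data Shape
    = Circle Double          -- radius
    | Rect Double Double     -- width, height

  area :: Shape -> Double
  area (Circle r) = pi * r * r
  area (Rect w h) = w * h
  -- the compiler knows every case; leave one out and it warns you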

What you're calling "layers of complexity and abstraction" are just "layers of abstraction" -- Haskell code to solve a problem tends to be simpler than Go code to solve the same problem. By simplicity, I'm talking about mathematical simplicity here. Not ease of learning. Simplicity is hard. But it pays.

I don't claim that the type system makes all problems go away magically, but it helps catch a far greater chunk of the errors.

> we both know you're gonna be debugging the same crap at the end of the day

Actually, no. If you had actually used Haskell, you'd know that debugging runtime problems is a much rarer phenomenon. It happens, but it's pretty rare.

I don't ever debug null problems. I almost never have to debug any crashes whatsoever. I don't debug aliasing bugs. The vast majority of the sources of bugs in other languages do go away.

> I'd prefer to work with a simpler environment, and it doesn't make me "too dumb to understand haskell"

Who claimed you're "too dumb to understand Haskell"? If you're smart enough to write working Go programs, you're most likely smart enough to learn Haskell. But learning Haskell means learning a whole bunch of new useful techniques for reliable programming, and that isn't easy.

People who come to learn Haskell expecting it to be a new facade over the same concepts they already know (as Go is, for example) are surprised by how difficult it is -- because it isn't just a new facade. There is a whole set of new concepts to learn. This set isn't really larger than the set of concepts you already know from imperative programming, but the overlap is small, and you forget just how involved the things you already know are.


Hey, this is a pretty late response but regarding:

"I don't ever debug null problems. I almost never have to debug any crashes whatsoever. I don't debug aliasing bugs. The vast majority of the sources of bugs in other languages do go away."

I think this is the red herring at the heart of the problem. Those bugs really aren't a big deal, they happen rarely once you're proficient and they are quickly solved on the rare occasion when they do happen.

I'm talking about logic bugs, the kind that your compiler isn't going to find, or even that a "sufficiently smart compiler" couldn't find because it's a misunderstanding in the specification that you have to bring back to the product owner for clarification. Or bugs that occur when 2 different services on different machines are treating each other's invariants poorly. Those are the bugs I spend time on.

I haven't spent any time at all with Haskell, really, but it seems like a poor trade off to have to learn a bunch and engineer things in a way that's more difficult in order to prevent the easiest bugs.


> I think this is the red herring at the heart of the problem. Those bugs really aren't a big deal, they happen rarely once you're proficient and they are quickly solved on the rare occasion when they do happen.

This is simply not true. I don't only work in Haskell. I also work with many colleagues on C and on Python.

Virtually every bug in C or Python that we encounter, including the ones we have to spend a significant amount of time debugging, is a bug that cannot happen in the presence of Haskell's type system.

> I'm talking about logic bugs, the kind that your compiler isn't going to find, or even that a "sufficiently smart compiler" couldn't find because it's a misunderstanding in the specification that you have to bring back to the product owner for clarification. Or bugs that occur when 2 different services on different machines are treating each other's invariants poorly. Those are the bugs I spend time on.

If you had experience with advanced type systems, your claims here would carry more weight. People who don't know advanced type systems tend to massively understate their assurance power. For example, two communicating services might use "session types" to verify that their protocol maintains its invariants. Or the type system might only accept programs that are forced to reject invalid inputs which would violate those invariants.

> I haven't spent any time at all with Haskell, really, but it seems like a poor trade off to have to learn a bunch and engineer things in a way that's more difficult in order to prevent the easiest bugs.

They aren't the "easiest bugs" at all.

For example, consider implementing a Red Black Tree.

In Go, imagine you had a bug where you rotate the tree incorrectly, such that you end up with a tree of the wrong depth -- surely you would have considered this a "logic" bug. One of the harder bugs that you wouldn't expect to catch with a mere type system.

In Haskell, I can write this (lines 27-37):

https://github.com/yairchu/red-black-tree/blob/master/RedBla...

to specify my red black tree, with type-level enforcement of all of the invariants of the tree.

With just these 10 lines, I get a compile-time guarantee that the ~120 lines implementing the tree operations will never violate the tree's invariants.

This logic bug simply cannot happen.
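For those who don't want to click through, the idea is roughly the following (a sketch in the same spirit, not the linked file itself): put each node's color and black-height into its type, using GADTs and DataKinds.

  {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

  data Color = Red | Black
  data Nat = Z | S Nat

  -- a node indexed by its color and its black-height
  data Node :: Color -> Nat -> * -> * where
    Leaf   :: Node 'Black 'Z a
    RedN   :: Node 'Black n a -> a -> Node 'Black n a -> Node 'Red n a
    BlackN :: Node cl n a -> a -> Node cr n a -> Node 'Black ('S n) a

  -- the root is black; the black-height is hidden but uniform
  data RedBlackTree a where
    RedBlackTree :: Node 'Black n a -> RedBlackTree a

Any rotation that breaks either invariant (a red node with a red child, or paths with different black-heights) simply fails to type-check.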

Learning a bunch of techniques is a one-time investment. After that, you will build better software for the rest of your career. Debugging far fewer bugs for the rest of your career. How could you possibly reject this trade-off, unless you expect a very short programming career?


I meant less interesting logic bugs, like "Oh we never considered the intersection of these 3 different business use cases".

I could see a couple ways where the type system could be more powerful than unit tests, but only to the extent that your unit tests didn't cover some obvious cases to begin with. Why not just write unit tests?

As for how I could possibly reject the trade-off... I mean, nobody's gonna hire me to code Haskell and my side projects are too systemy and not lispy enough to even consider it.

Thanks for the code sample though, I plan on looking at this more later tonight and getting a feel for it (barely glanced just now).


> Oh we never considered the intersection of these 3 different business use cases

Take a look at the history of any repository near you, for a project that uses C, Python or Java.

Review bug fix commits. See how many of them relate to "business use cases" and how many relate to implementation bugs. I believe you'll find the latter is far more common.

Even in the "business use cases", enforced invariants will be a tremendous help. Among the infinite possible combinations of use cases, none will be able to break any invariant enforced by the type system.

When you set out to prove a property of your program, you will end up finding bugs, almost regardless of what properties you are trying to prove.

Writing unit tests is also useful. But if I can have 10 lines in my Red Black Tree that mean I don't have to write any test whatsoever for the tree's invariants -- I saved myself from a whole lot of work writing and later maintaining tests with every change.

Generally, to get similar confidence levels from unit tests as you get from types, you'll need to write many more tests. If I had to choose between trusting a well-typed system written in Agda (which is similar to Haskell but has an even more powerful type system) with only the most trivial testing done, and a highly tested system written in a dynamic language or one with a much weaker type system, I'd definitely trust the Agda system more.

Or if I had to choose between trusting my 10 lines of type-level code and hundreds of lines of tests for the invariants of the tree, the 10 lines are of course far more reliable and easier to maintain.


Honestly, in my case, for my day job, it's way, way more "business use case" or more frequently "misunderstanding between 2 services" than it is an implementation error. We catch 90% of implementation problems with unit testing and/or just plain sanity checking it before release. Maybe Haskell could help us by making unit testing easier/unnecessary but of course there's no switching at this point.

We're probably a bit of an edge case, being a very service-oriented architecture (1,000 servers, 6-7 major classes of server, handling 10B (yes, B) requests a day). Most of our bugs consist of a flawed assumption that crosses 3-4 service boundaries on its way off the rails. I'll admit I'm ignorant of Haskell, but I just don't see a type system fixing that for us.


Did you actually look at commits to reach this conclusion?

Also, do you only commit/amend after doing extensive testing? Or do you also commit the results of debugging sessions as separate commits?

People have various biases that make them tend to remember some things and forget others. It is easy to have 100 boring implementation bugs and 1 interesting bug, and end up remembering the interesting bugs as the more common kind.

Also, can you give me a couple of examples of "flawed assumptions" across services?


Well, for example, yesterday I was doing some UI work on a project I'm totally unfamiliar with. My bugs were caused by bad SQL, a null pointer exception, and some JS silliness. Most of my time was sucked up in figuring out a requirement. The NPE took about 10 minutes out of my day, and the fix for it never made it into a commit message because it was, write some code, run it, oh shit forgot to initialize that, fixed.

Flawed assumptions across services tend to have to do with rate limiting, configuration mismatches, what happens when one class of service falls behind and queues fill up, stuff like that.


What kind of bad SQL? Most forms of bad SQL can be ruled out by a well-typed DSL. JS silliness is also a type-safety issue. You can use Fay, Elm, GHCJS or Roy to generate JavaScript from a type-safe language.

Some NPEs take just 10 minutes (which is not negligible), but some are far more expensive.

Figuring out the reason is sometimes hard when the code and its assumptions are badly documented.

Fixing NPEs is sometimes hard for silly reasons, such as having to touch third-party or "frozen" code.

Also, NPEs can become extremely difficult to fix when the code is bad in the first place.

Things like rate limiting and queue lengths can be encoded in types. You can use type tagging on connection sources/destinations to make sure you only hook up things that match rates/etc.
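A hypothetical sketch of that tagging idea (Rate, Source, Sink and connect are all made-up names): give each connection end a phantom rate parameter, and only let ends with matching tags be hooked together.

  {-# LANGUAGE DataKinds, KindSignatures #-}
  import Control.Monad (forever)

  data Rate = Slow | Fast

  newtype Source (r :: Rate) a = Source (IO a)
  newtype Sink   (r :: Rate) a = Sink (a -> IO ())

  -- only compiles when both ends carry the same rate tag
  connect :: Source r a -> Sink r a -> IO ()
  connect (Source pull) (Sink push) = forever (pull >>= push)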


Man, just when I thought we might agree on something.

A DSL? SQL IS a freakin DSL. Why would I put another layer of abstraction between me and it? Just more places for things to go wrong.

http://en.wikipedia.org/wiki/Inner-platform_effect

Anyway, that particular problem yesterday wasn't a compilation problem; it was due to my own misunderstanding of some pre-existing data.

You're proposing a more-complicated way of doing things with the idea that eventually we'll get to the promised land and things get simple again. I've just never seen it happen. Seen the opposite plenty of times.


SQL is a DSL indeed, but building SQL strings programmatically is an awful idea. You should build SQL queries structurally. The DSL to build SQL should basically be a mirror of the SQL DSL in the host language.

The DSL will guarantee that you cannot build malformed queries. It can also give guarantees you design it to give.

I'm not talking about replicating a database -- but just wrapping its API nicely in a type-safe manner.

I am proposing wrapping type-unsafe APIs with type-safe APIs. This adds complexity in the sense of enlarging the implementation. But it also adds safety and guarantees to all user code.
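As a toy sketch of what "building queries structurally" means (real Haskell libraries such as esqueleto or opaleye go much further and type the columns themselves; every name here is made up):

  import Data.List (intercalate)

  newtype Table  = Table String
  newtype Column = Column String
  data SqlValue  = SqlInt Int | SqlText String
  data Condition = Equals Column SqlValue
  data Query     = Select [Column] Table (Maybe Condition)

  -- the query is a value, not a string, so it cannot be syntactically
  -- malformed; values are passed as parameters, never spliced into the text
  render :: Query -> (String, [SqlValue])
  render (Select cols (Table t) mCond) =
    ( "SELECT " ++ intercalate ", " [c | Column c <- cols]
        ++ " FROM " ++ t ++ condText
    , params )
    where
      (condText, params) = case mCond of
        Nothing                    -> ("", [])
        Just (Equals (Column c) v) -> (" WHERE " ++ c ++ " = ?", [v])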


Hmmm... not quite a flashback. If it were a flashback, you would have written "C" instead of "Haskell", and would have been thinking of assembly language rather than C or C++.

You have the same problems no matter what; the question is whether you want a tool that helps with them, or one that doesn't.


There is one construct for repetition in Go. It's called "for".

    parser := ParserNil
    for i := 0; i < 10; i++ {
      parser = ParserSeq(parser, ParserSeq(parseComma, parseInt))
    }
PS. I'm a complete Go newbie. Don't take this code as the "one true Go way".


So you're forced to duplicate the monadic combinators (e.g: inlined replicateM here).

Now let's consider:

  myParser = do
    logDate <- date
    char ':'
    logTime <- time
    let fullTime = fromDateTime logDate logTime
    msg <-
      if fullTime < newLogFmtEpoch
      then do
        str <- parseString
        return (toLogMsg str)
      else do
        idx <- parseLogIndex
        getLogParser idx
    return (fullTime, msg)
Translating this to bare-bones Go would require using explicit continuations everywhere. Any combinator you use from Control.Monad is going to be duplicated/inlined in your Go code, violating DRY repeatedly.


> forced to duplicate the monadic combinators (e.g: inlined replicateM here).

??? This is a bare bones for loop. What exactly do you consider "duplication" ???

Regrettably, I can't accept the challenge because I have no idea what the code you pasted is doing and why.


A bare-bones for loop is the inlined implementation of replicateM. There are more complex combinators than replicateM, e.g. filterM, zipWithM, and more, which would require more than duplicating a bare-bones loop at every occurrence.
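For reference, replicateM is written once against the Monad interface and then works for parsers, IO, State and so on alike -- roughly like this (the real definition in Control.Monad is Applicative-based, but to the same effect):

  replicateM :: Monad m => Int -> m a -> m [a]
  replicateM n act
    | n <= 0    = return []
    | otherwise = do
        x  <- act
        xs <- replicateM (n - 1) act
        return (x : xs)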

The code I pasted above uses "do" notation to write a monadic parser. The parser parses a format that starts with <date>:<time>, and if that timestamp falls after a certain cutoff point, it parses the rest of the text differently. Which part of the code are you having difficulty understanding?


    > Essentially, you're saying that the point is not to use 
    > DSL's. The benefits of DSL's are explained everywhere, 
    > so I don't think I need to repeat them here...
DSLs aren't a gimme. In production code, produced and maintained by a team, and iterated through a lifetime of morphing business requirements, DSLs are almost always more of a hindrance than a help, because they impose an additional cognitive burden on their manipulators.

They're lovely to write, and elegant in a closed system, but in The Real World(tm) where we all live and work, we don't generally have the luxury of writing software to solve those classes of problems.


This is called the "Real World" fallacy. Your techniques are unfamiliar to me, and I am in the "Real World", therefore you are an academic who doesn't actually solve real world problems.

DSLs are used in production code, and solve real world problems better than ad-hoc repetitive code does.


    > DSLs are used in production code, and solve real world 
    > problems better than ad-hoc repetitive code does.
And this is called the "argument by assertion" fallacy.

I'm sure there are a lot of problems where DSLs make sense to use. They're simply not _most_ problems.


Whenever you're creating a function, you're defining a verb in your domain-specific implementation. Whenever you're creating a class / interface / prototype, you're defining a noun in your domain-specific implementation.

To me, the usage of the term itself ("DSL") does not make much sense. Using a combination of public or custom libraries and APIs is in itself a DSL, and the combination makes it unique per application. And when programming, you're extending that language all the time: that's what you're doing with every function, class or interface you add. That's what programming is - specifying to the computer how to do computations by building a language made of nouns and verbs that it can understand, and then forming sentences out of those nouns and verbs. And these definitions transcend the actual lines of code: when communicating with your colleagues, in speech or in writing, like in emails or specs, you need precisely defined words to refer to concepts within your app.

The term DSL in the context of software development is basically a pleonasm. And discussions on DSLs are actually stupid, as people argue about a non-issue.

The real discussion should be - in what contexts do you really need re-usability and/or composability and/or succinctness? Not always, I'll grant you that.

And here, I think we can learn from mathematics or physics, which span domains so complex as to be intractable without defining mini-languages to express things efficiently. Speaking of monads, many people have described them in terms of mathematics, as in the infamous "a monad is a monoid in the category of endofunctors". You could say monads are just a design pattern, with some simple properties to grasp and some examples, and normal people wouldn't need more than that to understand their usage. However, understanding their mathematical underpinnings, which use big and unfamiliar words that scare us, allows one to grok the notion and build bigger and better abstractions on top of it. And abstractions help us tackle even more complex problems. Yes, even in the real world.


    > Whenever you're creating a function, you're defining a 
    > verb in your domain-specific implementation. 
Yes. And by virtue of it being a function, i.e. a first-class operator in the language I'm working in, I also know _prima facie_ the semantics, cost, and implication of that verb.

This is critical and necessary knowledge. And it's precisely the knowledge that I _don't_ get (immediately) when I use a DSL. I have to know both the semantics of the verb within the context of the DSL, and the semantics of the DSL (as a whole!) in the context of my programming language.

That additional step is, more often than not, a significant burden. I'm disinclined to bear it, no matter how facially elegant it may make the solution.

    > The real discussion should be - in what contexts do you 
    > really need re-usability and/or composability and/or 
    > succinctness? Not always, I'll grant you that.
This is a disingenuous framing of the problem.


Ah, but internal DSLs do use the same constructs and semantics that the language provides, unless you're talking about macros.

And we aren't talking about macros here, but about monads (a quite reusable design pattern), possibly in combination with the do-notation from Haskell, or for-comprehensions from Scala, or LINQ from .NET ... basically a simple and standardized syntactic sugar to make operations on monads more pleasant to read, but not really required.

A monad is basically a container type together with certain functions that operate on it and satisfy certain properties. That's not a DSL. Those are just function calls on a freaking container implementing a design pattern.
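For reference, those "certain functions" are just the Monad interface (slightly simplified here -- the real class also has an Applicative superclass):

  class Monad m where
    return :: a -> m a                   -- put a value into the container
    (>>=)  :: m a -> (a -> m b) -> m b   -- chain a computation onto it
  -- plus three laws: left identity, right identity, associativity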


I guess what you call an "internal DSL" I call an API.



