I would classify this as a pun, not an explanation.

That said, I prefer a presentation using `times` and `join` rather than `ap` and `bind`:

   map   :: (a -> b) -> (f a -> f b)
   times :: (f a, f b) -> f (a, b)
   join  :: f (f a) -> f a
Reason being, `ap` and `bind` are just `times` and `join` composed with `map`; it's easier (for me) to think about what `ap` and `bind` bring to the table separately from the behavior of `map`.
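
To make that concrete, here's a sketch in TypeScript with Array as the functor (the helper names are mine, and this `times` is the cartesian-product choice):

    // map: apply a stage to every element of the source
    const map = <A, B>(xs: A[], f: (a: A) => B): B[] => xs.map(f);

    // times: a pair of sources becomes a source of pairs (cartesian product)
    const times = <A, B>(xs: A[], ys: B[]): Array<[A, B]> =>
      xs.flatMap(x => ys.map(y => [x, y] as [A, B]));

    // join: a source of sources becomes a single source
    const join = <A>(xss: A[][]): A[] => xss.flat();

    // ap is just times followed by map with function application
    const ap = <A, B>(fs: Array<(a: A) => B>, xs: A[]): B[] =>
      map(times(fs, xs), ([f, x]) => f(x));

    // bind is just map followed by join
    const bind = <A, B>(xs: A[], k: (a: A) => B[]): B[] => join(map(xs, k));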

`times` lets you take a pair of sources and turn it into a source of pairs. This lets you build pipelines that flow together from multiple sources into a single result; with just `map` you can only assemble a unary pipeline with one source.

`join` lets you take a source of sources and meld the layered structures together into a single source. This lets you create pipelines whose contents dynamically extend the pipeline. Without `join`, you can only construct pipelines whose stages are fixed up-front.

(As a bonus, the presentation with `times` leads to the idea that an "applicative functor" is a monoidal functor. If you don't know what a monoid is, maybe it's a toss-up in terminology, but the term "monoid" is vastly more common than the term "applicative".)




Thank you for that; the explanation of `times` makes a lot of sense. However, I can't really understand your explanation of `join`. Do you have an example or a longer explanation of "pipelines whose contents dynamically extend the pipeline" and how they differ from "pipelines whose stages are fixed up-front"?


Are you familiar with promises, particularly in the JavaScript ecosystem? A Promise<A> represents a pending interaction; you'll get an A when the interaction has finished, but you don't have an A now. But even while you don't have the A, you can prepare a pipeline now that will process it later.

Promise<_> is a functor because we can write a function `map : (Promise<A>, A -> B) -> Promise<B>` that takes a pipeline stage to run, and promises to run it on the A when it arrives. That stage produces a B, but we don't have it yet, either (since it needs that A). Just having a `Promise<B>` doesn't tell us how many pipeline stages it has built up -- the fact that we called `map` isn't evidenced in the type itself -- but we know that there's a fixed number, and that all of the stages have already been given to it. After all, we call `map` in the current timestep; the stages are known long before they get to run.
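
In TypeScript terms, `.then` with a plain (non-promise-returning) callback plays exactly this role; a minimal sketch:

    // map: attach a stage now; it runs on the A once the interaction finishes
    const map = <A, B>(pa: Promise<A>, f: (a: A) => B): Promise<B> =>
      pa.then(f);

    // e.g. (fetchUser is a hypothetical Promise-returning call):
    // const nameLength = map(map(fetchUser(id), u => u.name), s => s.length);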

Promise<_> is a monoidal functor because we can write a function `times : (Promise<A>, Promise<B>) -> Promise<(A, B)>` (but see note^). We don't necessarily have either the A or the B, and the interactions that produce them may not complete at the same time; but eventually we will have both of them, so we can wait until then and pair them together. Because we can wait for two pending interactions at once, we can wait for any number of them at once; think about how to write a function `[Promise<A>] -> Promise<[A]>` using `times`.
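
A sketch in TypeScript (in everyday code `Promise.all` already does both of these jobs; this just shows the `[Promise<A>] -> Promise<[A]>` exercise built out of a hand-rolled `times`):

    // times: wait for both pending interactions, then pair the results
    const times = <A, B>(pa: Promise<A>, pb: Promise<B>): Promise<[A, B]> =>
      pa.then(a => pb.then(b => [a, b] as [A, B]));

    // sequence: fold the list with times, pairing in one more result each step
    const sequence = <A>(ps: Promise<A>[]): Promise<A[]> =>
      ps.reduce(
        (acc, p) => times(acc, p).then(([xs, x]) => [...xs, x]),
        Promise.resolve([] as A[])
      );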

With `times`, we're just combining all of the stages in the existing promises together. Unlike `map`, which applies stages sequentially, `times` applies them concurrently -- like a branching series of pipes with Y-bends where `times` is used. With this, a `Promise<A>` still has a fixed number of stages known up-front -- all stages are still provided through `map`, which we still call before we ever have an actual `A` to run them with -- but we also now have a fixed number of forks/branches, where before it was a straight-line series of stages.

Promise<_> is a monad because we can write a function `join : Promise<Promise<A>> -> Promise<A>`. The nested promise means that we need to wait for one pending interaction, and then wait for another pending interaction, before we can get an A. This often happens because we interact sequentially with a backend service; we need to log in before we query the system, so if we try to put together pipeline stages like `(Username, Password) -> Promise<Session>` and `Session -> Promise<SensitiveInfo>`, we'll end up with a `Promise<Promise<SensitiveInfo>>`. This type only lets us make up to two interactions; if we can't get `SensitiveInfo` within two interactions, we simply can't write the program at all.
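
Here's that login example sketched in TypeScript (the types and the two stage functions are made up for illustration):

    type Session = { token: string };
    type SensitiveInfo = { secrets: string[] };
    declare function login(username: string, password: string): Promise<Session>;
    declare function fetchSensitiveInfo(session: Session): Promise<SensitiveInfo>;

    // join: collapse a nested pending interaction into a single one
    const join = <A>(ppa: Promise<Promise<A>>): Promise<A> =>
      ppa.then(pa => pa);

    // Mapping fetchSensitiveInfo over the login promise conceptually yields
    // Promise<Promise<SensitiveInfo>>; join collapses it to a single promise.
    // (Native .then already does this flattening whenever the callback returns
    // a promise, which is why the nested type rarely appears in everyday code.)
    const info: Promise<SensitiveInfo> =
      login("alice", "hunter2").then(session => fetchSensitiveInfo(session));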

The `join` function lets us collapse two interactions into a single Promise; and just like `times`, if we can collapse two interactions, we can collapse any number of them. With `join`, a type like `Promise<SensitiveInfo>` may take an arbitrary number of interactions before we finally get the result. However, this is a lot more powerful than you might expect: those inner promises don't exist yet! They will exist, as each interaction completes; but now a single `Promise<A>` includes both the stages we've set now, and any stages that future interactions may generate. We can no longer tell how many stages it will take until the final `A` is computed, nor how many interactions it will take. We'll only know when it actually happens.

(^) Well, we also need a function `pure : A -> Promise<A>`, which lets us pretend we don't have an `A` yet. Or, put differently, it puts an A behind a "no-op" interaction that completes immediately and makes the A available to any downstream stages as soon as it's needed. Something like an echo server.
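
In Promise land this is essentially `Promise.resolve`; a tiny sketch:

    // pure: put a value you already have behind a no-op interaction that
    // completes immediately (in everyday code you'd just call Promise.resolve)
    const pure = <A>(a: A): Promise<A> => new Promise<A>(resolve => resolve(a));

    // downstream stages can't tell it apart from a "real" pending interaction
    pure(42).then(n => console.log(n + 1)); // logs 43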


Thanks a lot again, that was a great explanation. I see what you mean now by "pipelines whose contents dynamically extend the pipeline".



