So really monads are essentially an abbreviated syntax? Chaining functions, where (some of) the arguments are hidden from the language syntax and inserted by the compiler?
Are monads also linear types? I mean, if I have a monad which represents system IO state, each value of system state can only be consumed once in actual execution, I can’t split the IO state in two and print different outputs on each branch. But I can do that in a conditional expression because only one branch is ever actually realised. For IO state, data branching is only allowed when guarded by control branching.
> So really monads are essentially an abbreviated syntax? Chaining functions, where (some of) the arguments are hidden from the language syntax and inserted by the compiler?
I recently wrote a note related to this that had a useful exchange:
>> When you see a "do" block, where do you look to figure out what it's actually doing?
> If you need to know, you look at what consumes the result. If it's abstract (say it's a top level binding `... -> m a` where the m is abstract) *you don't _need_ to know "what it is actually doing" - it should be correct for any choice of* `m`.
The behavior, and the "laws" or perhaps general far-reaching properties, are the more important part of Monads I think.
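To make that "correct for any choice of `m`" point concrete, here's a small sketch (the name `pairUp` is mine, not from the thread): a do block written against an abstract `Monad m` never says what effect happens; it just has to be lawful for whatever `m` ends up being.

```haskell
-- A hypothetical helper, polymorphic in the monad m. Nothing here says
-- what "effect" m performs; the do block is valid for any Monad instance.
pairUp :: Monad m => m a -> m b -> m (a, b)
pairUp ma mb = do
  a <- ma
  b <- mb
  pure (a, b)

-- The same code specialised three ways:
--   pairUp (Just 1) (Just 2)  ==  Just (1, 2)
--   pairUp [1, 2] "ab"        ==  [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
--   pairUp getLine getLine    ::  IO (String, String)
```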
A monad is basically a wrapped value X plus a bunch of functions that let you pretend you're working with X directly; that's what the compiler is abbreviating for you: all the necessary wrapping/unwrapping and member calls. In the Prolog example, what we care about is the last/latest value, and monads let us just shove the previous value(s) into a hole and forget about them, or at least ignore them until later.
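A State-monad sketch of that "shove the previous value into a hole" idea (my example, assuming the `State` monad from mtl/transformers, not the Prolog thread's code): the running value is threaded behind the scenes, and the do block only mentions it when you actually want it.

```haskell
import Control.Monad.State

-- The counter is the hidden, threaded value; each step reads/updates it
-- without us passing it around explicitly.
label :: String -> State Int String
label name = do
  n <- get               -- peek at the threaded value when we need it
  put (n + 1)            -- update it; the plumbing is the monad's job
  pure (show n ++ ": " ++ name)

labels :: [String]
labels = evalState (mapM label ["foo", "bar", "baz"]) 0
-- ["0: foo", "1: bar", "2: baz"]
```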
With error monads we don't always care if something failed (like a write), but we probably want to keep that information for later, so it just gets hidden and we can pretend we're just passing around a value (see https://fsharpforfunandprofit.com/rop/ ).
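In Haskell terms that's the `Either` monad (a sketch of the "railway" style from that link; the validation functions are made up): the failure travels along in the `Left` channel, and the happy path reads as if you were just passing the value along.

```haskell
-- Hypothetical validation steps; each can fail, but the pipeline
-- below never mentions the error case explicitly.
parseAge :: String -> Either String Int
parseAge s = case reads s of
  [(n, "")] -> Right n
  _         -> Left ("not a number: " ++ s)

checkRange :: Int -> Either String Int
checkRange n
  | n >= 0 && n < 150 = Right n
  | otherwise         = Left ("out of range: " ++ show n)

validAge :: String -> Either String Int
validAge s = do
  n <- parseAge s     -- a Left here short-circuits the rest
  checkRange n
-- validAge "42"  == Right 42
-- validAge "abc" == Left "not a number: abc"
```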
I'm not particularly clear on how Haskell does the IO monad, but basically the hidden value is what lets the compiler keep track of what was done pre/post IO, allowing you to thread the whole world through an IO call.
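One common way to picture it (a teaching model, not GHC's actual definition; GHC's real `IO` threads an unboxed `State# RealWorld` token, but the shape is the same):

```haskell
data World = World   -- stand-in: the real world isn't actually a Haskell value

-- A toy model of IO: an action is a function that takes "the world",
-- does something, and hands back a result plus the changed world.
newtype FakeIO a = FakeIO (World -> (a, World))

-- Sequencing threads the world through each step in order, which is
-- what forces the actions to happen one after another, even lazily.
bindFake :: FakeIO a -> (a -> FakeIO b) -> FakeIO b
bindFake (FakeIO f) k = FakeIO $ \w0 ->
  let (a, w1)  = f w0
      FakeIO g = k a
  in  g w1
```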
"Do" uses the same trick as that prolog example, by threading previous calculations through a function call you can force
Do
Foo1
Foo2
Foo3
to be called in that order (important for a lazy language) so you don't have force it by nesting them Foo3 (Foo2(Foo1(X))) can get pretty ugly fast.
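Roughly, the do block is sugar for chaining with `>>=` (or `>>` when a step's result is ignored), so the nesting is still there, just hidden. A self-contained sketch (foo1/foo2/foo3 are placeholder actions of my own, lower-cased so they're valid Haskell):

```haskell
-- Hypothetical actions, just so the snippet stands alone.
foo1, foo2, foo3 :: IO ()
foo1 = putStrLn "foo1"
foo2 = putStrLn "foo2"
foo3 = putStrLn "foo3"

-- These two are the same program; the compiler writes the second for you.
sugared :: IO ()
sugared = do
  foo1
  foo2
  foo3

desugared :: IO ()
desugared = foo1 >> (foo2 >> foo3)
-- or fully spelled out: foo1 >>= \_ -> foo2 >>= \_ -> foo3
```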
How does that relate to the "pipe operator", as in https://github.com/tc39/proposal-pipeline-operator ?
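Roughly: the pipeline operator is plain left-to-right function application on ordinary values, while `>>=` does the same left-to-right chaining but also threads a monadic wrapper (Maybe, Either, IO, ...) between the steps. A small Haskell sketch of the contrast (my names; `&` from Data.Function is the closest built-in analogue of `|>`):

```haskell
import Data.Function ((&))  -- x & f = f x, i.e. a pipe operator

-- Pipeline style: plain values flowing left to right, no wrapper involved.
reverseWords :: String -> String
reverseWords s = s & words & map reverse & unwords
-- reverseWords "ab cd" == "ba dc"

-- Monadic style reads the same way left to right, but each step also
-- carries a wrapper (Maybe here; could equally be IO or Either).
firstWordLength :: String -> Maybe Int
firstWordLength s = headMaybe (words s) >>= \w -> Just (length w)

-- Hypothetical helper for the example above.
headMaybe :: [a] -> Maybe a
headMaybe (x:_) = Just x
headMaybe _     = Nothing
```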