The section on "Mapping Category to Programming" seems to totally miss the point of how category theory is actually relevant in programming. Functors, applicatives, and monads are category theory concepts that are particularly useful in programming, but they aren't even mentioned here.
Category theory is way more abstract than programming, so the truth is that you don't need to know what these concepts mean in category theory; just understand them as programming concepts to start with. For example, monoids are ubiquitous in programming, but the category theory view of a monoid as a one-object category is too abstract to be helpful. Instead, it's much more useful to just think of a monoid as a set equipped with an associative binary operation and an identity element. Then you can easily see that strings, lists, and numbers (with their applicable operations) are all monoids.
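That "set plus associative operation plus identity" view can be spot-checked directly in code. Here's a minimal sketch (the helper name `is_monoid_like` is made up for illustration) that tests the two monoid laws on a few sample values:

```python
# Sketch of the "programming view" of a monoid: a set of values, an
# associative binary operation, and an identity element.

def is_monoid_like(op, identity, samples):
    """Spot-check the monoid laws on a few sample values."""
    for a in samples:
        # Identity laws: op(e, a) == a == op(a, e)
        if op(identity, a) != a or op(a, identity) != a:
            return False
    for a in samples:
        for b in samples:
            for c in samples:
                # Associativity: op(op(a, b), c) == op(a, op(b, c))
                if op(op(a, b), c) != op(a, op(b, c)):
                    return False
    return True

# Strings under concatenation, lists under concatenation, ints under addition
print(is_monoid_like(lambda a, b: a + b, "", ["foo", "bar", "baz"]))  # True
print(is_monoid_like(lambda a, b: a + b, [], [[1], [2, 3], []]))      # True
print(is_monoid_like(lambda a, b: a + b, 0, [1, 2, 3]))               # True
```

This is only a probabilistic check on samples, not a proof, but it makes the pattern concrete: same checker, three different monoids.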
> the category theory view of a monoid as a one-object category is too abstract to be helpful
That’s not true at all! It goes to the essence of what it means for a thing to be foldable onto itself, i.e. the things you can reduce() or fold().
There’s an entire paradigm of data processing where all you do is map-and-reduce, so there’s something very important and practical there.
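The connection is direct: a monoid's operation and identity are exactly the two arguments reduce() wants, and associativity is what lets the fold be split into chunks and recombined. A small sketch:

```python
from functools import reduce

# A monoid (op, identity) is exactly what reduce() needs to fold a
# collection. Associativity means the fold can be split into chunks
# and the partial results combined in any grouping - the basis of
# map-and-reduce style data processing.
op = lambda a, b: a + b

words = ["map", "and", "reduce"]
lengths = list(map(len, words))          # the "map" step: [3, 3, 6]
total = reduce(op, lengths, 0)           # the "reduce" step, (+, 0) monoid
print(total)  # 12

# Split the work into two chunks and combine the partial folds:
left = reduce(op, lengths[:2], 0)
right = reduce(op, lengths[2:], 0)
print(op(left, right) == total)  # True - associativity at work
```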
Sure, but my point is just that the category theory view of a monoid is harder for most people to understand.
For example, it's easy to see that integer addition is an operation that combines two elements of the set "Integer" into another element of the same set. E.g. 1 + 2 = 3. From there, it's a small step to seeing the same pattern elsewhere, and recognizing it as the "monoid" pattern. E.g. "foo" + "bar" = "foobar" for string concatenation.
In category theory, one instead has to think of this as the composition of two morphisms being equivalent to another morphism. E.g. +2 ∘ +1 = +3. That is much less intuitive for most people, I think.
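For what it's worth, that morphism view can also be written out in a few lines, which may or may not make it more intuitive: each integer n becomes the function "add n", and composing those functions corresponds to adding the integers.

```python
# Sketch of the one-object-category view of (Integer, +): each integer n
# is an arrow "add n", and composition of arrows is addition of integers.

def add(n):
    return lambda x: x + n

def compose(f, g):
    return lambda x: f(g(x))

plus3 = compose(add(2), add(1))   # +2 composed with +1
print(plus3(0))                   # 3, same as add(3)(0)
print(plus3(10) == add(3)(10))    # True
```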
I have nothing against category theory. It is very powerful precisely because it is so abstract. But if you want to learn how to apply the relevant concepts to programming, that abstraction is often more of an obstacle than a help.