Indeed they call for new names, as they encompass far more than iterators.
If you read a bit more about them, I think you will be surprised to see the breadth of what these abstractions can be used for. To start, they've been used to build a new compositional foundation for game theory[1], and they can be used to model gradient-based learning in machine learning[2].
As for their simpler applications as getters and setters, they are programmable in the sense that you can think of lens/prism types as interfaces that can be implemented arbitrarily. So you can create your own lenses and combine them like legos to construct new, "bigger" getters and setters from the smaller components.
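For a concrete feel of that composition, here is a minimal sketch of the van Laarhoven encoding the lens library builds on; the record types and field names are made up for illustration:

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- The lens shape used by the lens library (simplified to one type
-- parameter): a function polymorphic over a Functor.
type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

-- Read through a lens by instantiating f to Const.
view :: Lens s a -> s -> a
view l = getConst . l Const

-- Write through a lens by instantiating f to Identity.
set :: Lens s a -> a -> s -> s
set l x = runIdentity . l (Identity . const x)

-- Hypothetical nested records.
data Address = Address { addressCity :: String } deriving Show
data Person  = Person  { personAddress :: Address } deriving Show

address :: Lens Person Address
address f (Person a) = Person <$> f a

city :: Lens Address String
city f (Address c) = Address <$> f c

main :: IO ()
main = do
  let p = Person (Address "Oslo")
  -- Plain (.) snaps the two small lenses together into a "bigger"
  -- getter/setter that reaches two levels deep.
  print (view (address . city) p)          -- "Oslo"
  print (set (address . city) "Bergen" p)  -- city replaced, rest intact
```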
This thread is about traversing a tree. At what point do we take a step back and recognize that iterating through a data structure and "building new compositional foundations for game theory" shouldn't be conflated?
When does someone give up on the pageantry of programming and just get something done by looping through data instead of "constructing getters and setters to model gradient-based machine learning"?
It really seems like the straightforward way of doing things is to write the simple iteration first and worry about the game theory after all of that is figured out.
I do get, and to some extent sympathize with, your position. But the comments on the article are, in large part, addressing a higher level of abstraction than the narrower scope of the article's premise. As several commenters have noted, traversing a tree in a 'simple' and narrow manner is handled at the appropriate level of abstraction by already established mechanisms, particularly in Haskell using Traversable (a typeclass with a standardized implementation for trees). The discussion of Optics and Lenses is more of a side discussion about the syntax and implementation of access to a broader set of data structures than merely trees.
I think your comment is valuable in spirit because it could prompt a more thorough discussion of creating or popularizing a syntactically clean solution to traversing trees, and of shifting what counts as idiomatic, leaving the more abstract and general Optics discussion as a side dish for commenters to chew on as well.
Also, just a nitpick, but traversing a tree is a broader topic than iteration. Iteration implies, both colloquially and formally, performing a computation on each element of a data structure, while traversal is a more general algorithm that can be the basis of iteration but may also be more akin to a search through the data structure, with no computation expected until the searched-for 'member' is returned. So your conflation of iteration and traversal isn't quite an apples-to-oranges comparison, but it does seem a little like a Canis familiaris-to-Mammalia comparison.
I read your whole comment but I'm still not getting where the actual rubber meets the road. If I have a tree data structure what am I missing? What is it that I can't do that makes this extreme iteration complexity worthwhile? To me, having simple access to the data structure is enough because that's what I want a data structure for.
Traversable and lenses are very closely linked. If you go to the original paper leading to Traversable [1] and read through it, it feels basically identical to reading through the parts of the lens library that lay down the core abstractions and the laws implementations must follow if you want to be able to blindly manipulate them. In fact, the traverse function is a Traversal, and so fits trivially into the lens ecosystem.
Aren't Haskell Traversables different in that they preserve the structure if you map over them, as compared to the solution posted in the article, where things get flattened to what amounts to a list?
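For contrast, a minimal sketch with a hypothetical tree type: a derived Traversable gives a traverse that rebuilds the original shape around the results, whereas Foldable's toList flattens to a list.

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

import Data.Foldable (toList)

data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving (Show, Functor, Foldable, Traversable)

main :: IO ()
main = do
  let t = Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf) :: Tree Int
  -- Flattening: the tree's shape is gone.
  print (toList t)  -- [1,2,3]
  -- Traversing: each element is visited in an Applicative context
  -- (here IO), and the same tree shape is rebuilt around the results.
  t' <- traverse (\x -> print x >> pure (x * 10)) t
  print t'  -- Node (Node Leaf 10 Leaf) 20 (Node Leaf 30 Leaf)
```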
Hi! A few years ago I found myself wanting an equivalent of `column` that didn't strip color codes. After I implemented it in Haskell, I found it useful to use Nix to force static linking against libraries like gmp to reduce startup time. Perhaps what I ended up doing might be helpful for you too: https://github.com/bts/columnate/blob/master/default.nix
Thank you for the suggestion, I'll give this a whirl! I've fussed around with `--ghc-options '-optl-static -fPIC'` and the like in years past without success.
It's similar, but not quite. IntMap in Haskell uses the bit as the prefix unit, with a span factor of 2; this uses the nibble as the prefix unit, with a span factor of 16. Also, IntMap uses a bitmask for the range of prefix units in a node, while this uses a subnet-mask style to get the prefix of a node.
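For concreteness, here is roughly what "nibble as the prefix unit" means; the helper is hypothetical, not taken from the submission:

```haskell
import Data.Bits (shiftR, (.&.))
import Data.Word (Word64)

-- Hypothetical helper: the k-th nibble (4-bit digit) of a key.
-- With a span factor of 16, each trie node branches on one of these
-- sixteen values per level, whereas IntMap branches on a single bit.
nibbleAt :: Int -> Word64 -> Word64
nibbleAt k key = (key `shiftR` (4 * k)) .&. 0xF
```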
I'm not sure what you mean by "necessarily must be persistent" here, but under my interpretation (that only persistent implementations of this data structure are possible) that's not true.
The IntMap paper by Chris Okasaki acknowledges that the data structure is not a new one (it's a PATRICIA tree) but one that was around 30 years old at the time his paper was published and deserved to be better known. The functional/persistent implementation is what's novel about Okasaki's paper.
Edit: The data structure this submission is about looks like great work! Excited to see it and will probably attempt my own ports of it to other languages.
Sorry, I wasn't clear. What I meant is that an idiomatic Haskell data structure needs to be persistent: an insertion needs to produce a new version of the data structure containing the inserted element while keeping the original intact. Almost all Haskell data structures need to satisfy this requirement, otherwise using them is a lot of PITA, even though the ST monad makes mutation fairly easy. The original submission, however, is not persistent, so the two aren't comparable.
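To illustrate what persistence means here, a small sketch with Data.IntMap: insertion yields a new map while the old one remains valid and unchanged.

```haskell
import qualified Data.IntMap.Strict as IntMap

main :: IO ()
main = do
  let m0 = IntMap.fromList [(1, "a"), (2, "b")]
      m1 = IntMap.insert 3 "c" m0  -- a new version; m0 survives intact
  print (IntMap.toList m0)  -- [(1,"a"),(2,"b")]
  print (IntMap.toList m1)  -- [(1,"a"),(2,"b"),(3,"c")]
```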
You should look into the "comonad" abstraction from the functional programming world. Dual to monads, they're a natural fit for situations where you might have a value with some sort of (possibly infinite) context (think: neighborhood, or history, etc.) that can be either pre-computed or computed on-demand.
This StackOverflow post[1] is a good starting point for understanding comonads. It points out that they can be used to model cellular automata (like Conway's Game of Life), sequences, streams, etc.
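A minimal sketch of the interface, with an infinite-stream instance written out by hand rather than imported from the comonad package:

```haskell
-- extract reads the focused value; duplicate decorates every position
-- with its whole context; extend runs a context-aware computation at
-- every position.
class Functor w => Comonad w where
  extract   :: w a -> a
  duplicate :: w a -> w (w a)
  extend    :: (w a -> b) -> w a -> w b
  extend f  = fmap f . duplicate

-- An infinite stream: the focus is the head, the context is the tail.
data Stream a = Cons a (Stream a)

instance Functor Stream where
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)

instance Comonad Stream where
  extract (Cons x _)      = x
  duplicate s@(Cons _ xs) = Cons s (duplicate xs)

-- Example: a computation that looks at a neighborhood (here, the next
-- element), applied at every position at once.
movingSum :: Num a => Stream a -> Stream a
movingSum = extend (\(Cons x (Cons y _)) -> x + y)
```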
In the Haskell world, folks have solutions to both of these problems:
The "large-object problem" can be tackled in a principled fashion using strict state threads (aka the ST monad: https://wiki.haskell.org/Monad/ST) or using vanilla mutable (IORef) references.
The "parent-child problem" is well-addressed by lenses, also known as functional references. They are basically composable getters and setters that allow you to read or update deep into nested structures.
I don't fully understand how the parent-child problem is addressed by lenses. How do lenses help address the problem that a reference update is akin to changing external state?
My understanding is that lenses help address the large-object problem.
You mean, Haskell developers ditch the functional idiom and use a more imperative one when they need it for performance? And yes, they do, and it works nicely. But it's not solving any problem with the functional paradigm.
Anyway, neither of them is a problem with the semantics of FP. They are problems of insufficient optimization, so they will probably get solved at some point.
ST is not 'imperative'. The monadic interface can look imperative, sure, but you can also use Applicative and friends, and then it's back to being functional. In fact, I prefer to think of what you call 'imperative' programming as function composition, where >>= is the reverse of the 'normal' composition operator. Indeed, sequential imperative commands and function composition are isomorphic.
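To make that concrete, here is the same Maybe pipeline written with do-notation, with >>=, and as straight Kleisli composition; the step functions are made up:

```haskell
import Control.Monad ((>=>))

step1 :: Int -> Maybe Int
step1 = Just . (+ 1)

step2 :: Int -> Maybe Int
step2 x = if x > 0 then Just (x * 2) else Nothing

-- "Imperative-looking" sequencing...
pipelineDo :: Int -> Maybe Int
pipelineDo x = do
  y <- step1 x
  step2 y

-- ...is just sugar for >>= ...
pipelineBind :: Int -> Maybe Int
pipelineBind x = step1 x >>= step2

-- ...which is function (Kleisli) composition read left to right.
pipelineCompose :: Int -> Maybe Int
pipelineCompose = step1 >=> step2
```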
Hum... You declare a state and modify it with one action after another. That looks quite imperative to me. The fact that it returns an interpreter instead of values is meaningless if it doesn't change the way you think about the program.
If you personally prefer to read it as the declaration of a program that, given a state, modifies it step by step, you can claim it's functional. But you are alone in that; the language even has specialized syntax to push people toward the imperative reading, and anything minimally complex all but requires it.
travisathougies is not alone in that. Every functional programmer worth their salt will consider a sequence of transformations from one state to the next a functional program, as long as there are no side effects outside the current function that transform state in ways not declared in the types of the input and return values (which would break referential transparency).
Functional reactive programming (i.e., combining functions over streams of changing values) is essentially that, and it's a pure functional paradigm, quite popular in front-end web development (and I hear it's even creeping its way into the back-end).
Here is an STM function: orElse :: STM a -> STM a -> STM a
It runs the first transaction and returns its result, or runs the second instead if the first one blocks by calling retry.
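A typical use, assuming the stm package's TQueue: try the first source, and if that transaction blocks, fall back to the second.

```haskell
import Control.Concurrent.STM (STM, TQueue, orElse, readTQueue)

-- Take from q1; if readTQueue q1 calls retry (queue empty), run the
-- second transaction instead.
takeEither :: TQueue a -> TQueue a -> STM a
takeEither q1 q2 = readTQueue q1 `orElse` readTQueue q2
```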
What exactly do you mean by "same input -> same output", and how exactly does a programmer think about that function in this way when composing it into an STM interpreter?
I think you should give the guy above the benefit of the doubt and restrict yourself to ST. IO and STM are different beasts, and still they're not imperative. The issue with IO and STM is global state. Mutation of global state can be done imperatively, as many imperative languages do. It can also be done functionally, either by using monads to hide the affine types in a nice wrapper in the absence of proper affine types, or by using a proper affine type system. Haskell chose the former; other languages, like Clean, chose the latter.
Same input -> same output is a nice consequence of referential transparency, which is a common thing we have in functional languages, but it is not actually due to the functional nature. Imperative languages can trivially be made to have this property by disallowing mutation of variables not in the immediate scope and restricting IO side effects (and maybe a few other straightforward restrictions depending on the language in question).
With that out of the way: same input -> same output does hold for `orElse :: STM a -> STM a -> STM a`. However, you really have to expand out the types.
So, `orElse` should behave the same as a function with this type:
> (State# RealWorld -> (# State# RealWorld, a #)) -> (State# RealWorld -> (# State# RealWorld, a #)) -> State# RealWorld -> (# State# RealWorld, a #)
Now, the definition of 'same input -> same output' is clear. Assuming the global state (State# RealWorld) is the same, then the output state will be the same. Actually, this is not a property of all languages, as most languages allow arbitrary side effects in any code. Haskell does not. This is the key property that makes STM easy in Haskell and difficult in other languages. Note that the 'State# RealWorld' is always handled linearly, so there is no possible way the same 'value' could be passed in twice. Again, hiding this behind a monad enforces that this value is handled linearly, since Haskell doesn't have linear types (or at least didn't when STM, ST, and IO were written).
I was referring to the ST monad which bts and travisathougies were talking about, not STM.
I mean that when you run ST code twice with the same input, you will get the same output. E.g., if I write (runST f) + (runST f), that's the same as 2 * (runST f), etc.
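A tiny sketch of that property: f below mutates internally, yet runST f is a plain Int, so the two expressions are interchangeable.

```haskell
import Control.Monad.ST (ST, runST)
import Data.STRef (newSTRef, readSTRef, writeSTRef)

f :: ST s Int
f = do
  r <- newSTRef (0 :: Int)
  writeSTRef r 21  -- internal mutation, invisible from outside
  readSTRef r

main :: IO ()
main = print (runST f + runST f, 2 * runST f)  -- (42,42)
```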
As for thinking, it means I don't use step-through debuggers ever, because I don't need the whole program to be in a certain state to watch what happens. I just grab the function, grab the input, and run it in isolation.
Thanks for the note, but I'm afraid not. At this point, it's taking a lot of time for the data entry each year, plus conferences are now almost all awarding 3-6 best papers rather than 1, so the annual work has expanded considerably. Also it's a lot harder now to find the older awardees as various websites have disappeared, and former PC chairs often have forgotten the outcome.
[1] https://arxiv.org/pdf/1603.04641
[2] https://arxiv.org/abs/2103.01931