> As for how it affects the question of imperative vs. functional data structure performance, it should tend to favor the imperative ones. The issue is that purely functional data structures aren't allowed to just grab a block of memory all at once and insert data into it at leisure. (That would be a form of mutation.) Instead space has to be allocated as it is needed, in a piecemeal fashion. As a result, longer-lived and more active purely functional data structures will tend to scatter across the heap over time.
It's a bit more complicated than that. To talk about the runtime (and memory) of an algorithm or a data structure, you need a machine model. If you look at what happens in, say, Haskell: we write our algorithms in a purely functional way, but we are interested in their runtime after compilation to native code. The compiler is free to do any transformation that preserves correctness, so if it's smart enough, it might grab a block of memory all at once. Or if it can prove that data won't be used again, it can safely update it in place. (The language Clean even has type-system support for that kind of mutation; see https://en.wikipedia.org/wiki/Linear_type_system.)
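Haskell even exposes that idea to the programmer rather than leaving it entirely to the optimizer. A minimal sketch (my own example, not from the thread) using the ST monad, where local mutation is allowed precisely because the type system guarantees it can't be observed from the outside:

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Pure from the outside, imperative on the inside: the mutable STRef
-- cannot escape runST, so referential transparency is preserved.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0
  mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
  readSTRef acc
```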
You might be able to analyse functional algorithms with a machine model that doesn't involve compilation to some form of imperative code, perhaps by counting reductions in the lambda calculus. But then it's hard to compare that to the asymptotic performance of imperative algorithms.
Absolutely. But many of those hypothetical compiler transformations would only work if the code isn't really taking advantage of some of the most important properties of purely functional data structures. Persistence, for example.
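To make that concrete, here's a minimal sketch (mine, using Data.Map from the containers package as a stand-in persistent structure): because the old version is still consulted after the update, no compiler could soundly rewrite the insert as an in-place mutation.

```haskell
import qualified Data.Map.Strict as Map

-- Persistence: both versions stay valid after the update.
versions :: (Maybe Int, Maybe Int)
versions =
  let v1 = Map.fromList [("a", 1)]
      v2 = Map.insert "b" 2 v1   -- v1 is untouched; v2 shares most of its structure
  in (Map.lookup "b" v1, Map.lookup "b" v2)   -- (Nothing, Just 2)
```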
That, and in a sense it's side-stepping the theoretical issue more than anything else. A compiler that's smart enough to find spots where a performance improvement can safely be gained by replacing a pure structure with a mutable one in the background doesn't really counter the idea that impure structures can be faster so much as acknowledge it.
Yes. And instead of comparing purely functional vs imperative languages, we could sidestep those issues by comparing persistent vs ephemeral data structures, regardless of language.
(For an example of the distinction, straight out of Okasaki's book: the standard purely functional queue implemented as two linked lists has O(1) amortized cost for adding and removing only if it's used "single-threaded", i.e. ephemerally. Used persistently, you can hit the O(n) worst case over and over, so the amortized bound breaks down.)
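Here's roughly what that queue looks like (a sketch in Haskell; the names are mine, not Okasaki's):

```haskell
-- Front list holds elements in order; rear list holds new arrivals reversed.
data Queue a = Queue [a] [a]

empty :: Queue a
empty = Queue [] []

-- Invariant: the front is empty only if the whole queue is. When the front
-- runs dry, reverse the rear in one O(n) step.
check :: [a] -> [a] -> Queue a
check []    rear = Queue (reverse rear) []
check front rear = Queue front rear

-- Enqueue: O(1).
snoc :: Queue a -> a -> Queue a
snoc (Queue front rear) x = check front (x : rear)

-- Dequeue: O(1) except when it triggers the reverse.
uncons :: Queue a -> Maybe (a, Queue a)
uncons (Queue []          _)    = Nothing
uncons (Queue (x : front) rear) = Just (x, check front rear)
```

The amortized argument charges each element's share of the eventual `reverse` to the `snoc` that added it, and that bookkeeping only works if each version of the queue is used once. If you hold on to a version whose front is about to run dry and call `uncons` on that same version repeatedly (persistent use), every call pays the full O(n) `reverse` again. (Okasaki's lazy banker's and real-time queues are the fix for exactly that.)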
Sure. Honestly, that's how I had been interpreting it from the beginning. The SO question seems to be specifically talking about algorithms and not programming languages.