I did not mean to claim that they do not. My claim is that "much of the functional..." So, referring to things like cons lists is easy, though that example is an old one. Look at the multitude of pointers in any of the purely functional data structures, though. There is a lot of overhead in building up these structures.
Look into the Scala (and Clojure?) vector sometime. It is literally a 32-ary tree. That makes the algorithms somewhat pleasant to look at, but are we really shocked when something using a simple array can beat it in performance?
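To make the "32-ary tree" point concrete, here is a toy sketch (in Python, not the real Scala/Clojure implementation) of the path-copying idea those vectors use. The names and the fixed two-level shape are mine; the real structures grow levels as needed.

```python
# Toy two-level, 32-wide trie: "updating" one element copies only the
# nodes on the path to it (here: one leaf + the root) and shares the
# other 31 leaves with the old version.
BRANCH = 32

def updated(root, index, value):
    """Return a new trie with trie[index] = value, sharing untouched leaves."""
    leaf = list(root[index // BRANCH])   # copy one 32-slot leaf
    leaf[index % BRANCH] = value
    new_root = list(root)                # copy the 32-slot root
    new_root[index // BRANCH] = leaf
    return new_root                      # all other leaves are shared

v1 = [[0] * BRANCH for _ in range(BRANCH)]   # 1024 zeros
v2 = updated(v1, 40, 99)                     # v1 is untouched
```

That pointer-chasing and per-update copying is exactly the overhead a flat array avoids, which is why the array can win on raw speed.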
Now, it is a perfectly fair and valid point that optimizing for development time may be preferred. We are allowed contradictory yet valid points in situations defined by trade-offs.
I think that no one will dispute that an imperative, transient data structure is faster than a functional, persistent one if you use it imperatively.
If you use persistent data structures the way they are meant to be used, they can and will be faster than a simple array.
Both have valid use-cases, but they are often distinct.
Even the one you are making is... tough. Persistent data structures are not a surefire win against mutable versions. Again I find myself referring to the DLX algorithm, the entire point of which is that it is easier and faster to permute one data structure through different configurations than it is to have every configuration instantiated in a persistent fashion.
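For anyone who hasn't seen it, the trick DLX rests on fits in a few lines. This is a sketch of the "dancing links" move (the node class and method names are mine, not Knuth's): an unlinked node keeps its own pointers, so undoing the removal is just two writes.

```python
# Circular doubly-linked list node. remove() unlinks the node but the
# node still remembers its neighbors, so restore() relinks it in O(1).
# Backtracking search mutates and un-mutates one structure in place,
# instead of building a fresh persistent copy per configuration.
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = self.next = self   # singleton circular list

    def insert_after(self, p):
        self.prev, self.next = p, p.next
        p.next.prev = self
        p.next = self

    def remove(self):                  # unlink; keep own prev/next
        self.prev.next = self.next
        self.next.prev = self.prev

    def restore(self):                 # two writes: the links "dance" back
        self.prev.next = self
        self.next.prev = self

head = Node(0)
a = Node(1); a.insert_after(head)
b = Node(2); b.insert_after(a)
b.remove()      # list is now head -> a -> head
b.restore()     # list is head -> a -> b -> head again
```

Remove-try-restore is the whole inner loop of the search, and it is hard for a persistent structure to compete with two pointer writes.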
Does this mean that persistent data structures are without use? Of course not! Again, you may not be optimizing for execution speed, which is perfectly valid!
Pssst, let the developers that don't open their eyes beyond C and C++ think their languages are the only ones with such features. :)
Every time I see a C or C++ talk discussing value types or arrays, I end up thinking those folks have never looked at other programming languages, e.g. the Algol/Wirth family.
Another example of the Lady Gaga culture: following whatever is popular instead of learning about the wider world of programming languages.