To me, the fundamental issue with pure declarative or functional languages is that they ignore the simple fact that both the problem and solution domains of the set of all possible problems are, fundamentally, heterogeneous.
Sometimes, "what" I want is a program to do this that and the other thing in this given order. ie the "how". Other times, I don't care, I just want these properties to hold. Yet other times, I have no idea what I want at all and have to experiment and see what happens.
I want a language that solves the composition problem between these distinct solution spaces. The best programming language for any given task is the one whose world view best matches the preconceived spec in the programmer's head.
Most research languages take a key idea (everything is a list! or no side effects! or something like that) to a logical extreme. That's a great way to study a set of phenomena in a particular little universe, but most practical languages find a happy medium of thought-pure and just-fucking-works. We need to get some of these wins from high-concept languages back into just-fucking-works languages.
I think you, like many others, don't understand the practical ramifications of a completely pure language. Let's take Haskell as an example--it is, after all, the poster-child of purely functional research languages!
And yet, from a practical standpoint, Haskell is not pure. The underlying abstractions are pure, sure, but the language makes working with impure computation feel just like writing an impure program. The magic of Monads and do-notation may sound complex, but in reality it's just a neat way to write impure code in a pure way (sounds like a paradox, but it isn't).
Look at this snippet:
main = do name <- getLine
          putStr $ "Your name is " ++ name
This trivial program is technically purely functional. And yet it is also imperative from the programmer's point of view! It looks just like something you might write in Python with slightly different syntax.
So you ask: what is the advantage of writing a program this way rather than using an impure language? The answer is simple: Haskell lets you mark impure code using the type system. It's similar to the Scheme/Ruby convention of flagging destructive functions with a trailing '!', except here it's actually enforced.
This sequestering helps you avoid bugs by not having implicit mutation and IO everywhere, and it helps the compiler do clever optimizations like evaluating your code in a different order. And yet, if you need it, you have IO and State and fancy things like STM right there, with only a little bit of complication.
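To make that concrete, here's a minimal sketch of what "marking impure code in the type system" means (greetingFor and greet are made-up names, purely for illustration):

-- A pure function: the type String -> String promises no IO and no mutation.
greetingFor :: String -> String
greetingFor name = "Your name is " ++ name

-- An impure action: the IO in the type is the "mark", and it's enforced --
-- calling getLine from inside greetingFor simply wouldn't type-check.
greet :: IO ()
greet = do
  name <- getLine
  putStrLn (greetingFor name)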
Of course, learning to think in this admittedly roundabout way is tricky. But learning is a one-time cost; using a less expressive or harder-to-maintain language is a recurring cost.
In other words, there is no reason why a language focusing on one idea can't be a "just-fucking-works" language as well. In my experience, Haskell and Lisp are just as practical as others; the only difference is in the initial learning period. Look at Common Lisp: you can't get more "just-fucking-work"ing than that!
I think that one should usually ignore a one-time cost like learning in favor of recurring benefits, but others naturally disagree.
The short, few-liner Haskell version is beautiful. It's also not the same algorithm. So then you whip out the larger "direct translation" version and, suddenly, you're wishing for C.
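(The post doesn't name the snippet it means, but the usual example of this complaint is the famous short Haskell "quicksort" -- gorgeous, and not really Hoare's algorithm:)

-- The classic few-liner: elegant, but it allocates new lists at every level
-- instead of partitioning in place, so it's a different algorithm.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]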
This story repeats itself over and over again. For example, try implementing the Fisher-Yates shuffle.
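For a sense of what a faithful, in-place Fisher-Yates looks like in Haskell, here's one reasonable sketch using ST and STArray (specialized to Int lists to keep it short):

import Control.Monad (forM_)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STArray, getElems, newListArray, readArray, writeArray)
import Data.STRef (newSTRef, readSTRef, writeSTRef)
import System.Random (StdGen, mkStdGen, randomR)

-- In-place Fisher-Yates: mutable swaps inside ST, pure from the outside
-- because the mutation can never escape runST.
shuffle :: StdGen -> [Int] -> [Int]
shuffle gen0 xs = runST $ do
  let n = length xs
  arr <- newListArray (0, n - 1) xs :: ST s (STArray s Int Int)
  gen <- newSTRef gen0
  forM_ [n - 1, n - 2 .. 1] $ \i -> do
    g <- readSTRef gen
    let (j, g') = randomR (0, i) g
    writeSTRef gen g'
    vi <- readArray arr i
    vj <- readArray arr j
    writeArray arr i vj
    writeArray arr j vi
  getElems arr

From the caller's perspective, shuffle (mkStdGen 42) [1 .. 10] is an ordinary pure function call; the mutation is an implementation detail -- which is exactly the point of contention: the algorithm is the same, but expressing it takes noticeably more ceremony than in C.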
Now, in practice, you don't need in-place swaps. And you can argue code style and YAGNI and premature optimization and practical implications and day-to-day use and parallelization and whatever else. Yada yada yada.
I'm not saying there is anything wrong with Haskell. I'm saying that the foundation of Computer Science lies in the study of data structures, algorithms, and computability.
Software Engineering basically boils down to a search and optimization problem: Find the data structures and algorithms which (1) minimize the weighted average of costs and (2) maximize the value that the solution generates.
Haskell's approach to Software Engineering presupposes the cost savings of isolating mutation (and other purity concerns) as a requirement. I'm simply advocating that future language developers consciously address the problem of finding those optimal data structures and algorithms in minimal time, with an eye to the fact that data structures and algorithms are a fundamental law of computation.
I think Haskell can be a great vehicle for all our current day imperative programming needs (even if due to some library issues it's not quite as nice for all tasks yet).
Sometimes, "what" I want is a program to do this that and the other thing in this given order. ie the "how". Other times, I don't care, I just want these properties to hold. Yet other times, I have no idea what I want at all and have to experiment and see what happens.
I want a language that solves the composition problem between these distinct solution spaces. The best programming language for any given task is the one who's world view best matches the preconceived spec in the programmer's head.
Most research languages take a key idea (everything is a list! or no side effects! or something like that) to a logical extreme. That's a great way to study a set of phenomenon in a particular little universe, but most practical languages find a happy median of thought-pure and just-fucking-works. We need to get some of these wins from high concept languages back into just-fucking-works languages.