Funny, but HN oldsters probably remember when functional programming wasn't at all tied to immutable data structures, and when Lisp and derivatives were considered functional programming enough, without every discussion of the topic strictly requiring purity and immutability -- just first class functions, map, fold, and the like.
I was taught in university that functional programming means using functions as first-class values, so a function can take another function as a parameter. That's all. You can write functional code in assembler if you want. Immutability is just another design choice with its pros and cons.
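A minimal sketch of that definition, with made-up names:

    // A function received as an ordinary value: `applyTwice` is a
    // higher-order function, and `increment` is just an argument to it.
    function applyTwice(f, x) {
      return f(f(x));
    }

    const increment = n => n + 1;

    console.log(applyTwice(increment, 3)); // 5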
I've always thought that functional programming is about structuring your program in functions and their composition (as opposed to classes) - nothing to do with immutability either.
As for JS, I read "JavaScript: The Good Parts" one day and realized that I can do everything I want using just functions and closures - it's really elegant imo. (I even use this approach in Python sometimes)
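For instance, the closure-based encapsulation that book champions needs no classes at all (a toy sketch, not an example from the book):

    // Private state via a closure: no class, no `this`.
    function makeCounter() {
      let count = 0;                 // reachable only through the closure
      return {
        increment: () => ++count,
        current: () => count
      };
    }

    const counter = makeCounter();
    counter.increment();
    counter.increment();
    console.log(counter.current()); // 2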
The immutability comes in with the definition of 'function', which, in some circles, means something more specific than an arbitrary callable unit of code that may or may not return a value. While neither 'function' nor 'functional programming' can be used in computing with any precision unless they are qualified, there is a distinct paradigm in which functions have no side effects and are the only mechanism of elaboration.
Mathematically, a Function is a Relation that has two special properties: uniqueness (each input maps to at most one output) and existence (every input maps to some output).
(These property names may be wrong, as I haven't studied calculus in English.)
So on the machine, immutability plays an important role in the first property, uniqueness: it helps guarantee that results are not merely "equal objects, but different instances", which would break the function definition.
Of course, a function that returns a new object for the same arguments, even an immutable one, is not a true mathematical function.
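In JavaScript terms (a made-up constructor, just for illustration):

    // `pair` returns an equal but not identical object on every call,
    // so judged by object identity it fails the uniqueness property:
    const pair = (a, b) => ({ a, b });

    const p1 = pair(1, 2);
    const p2 = pair(1, 2);

    console.log(p1 === p2);                      // false: different instances
    console.log(p1.a === p2.a && p1.b === p2.b); // true: equal contents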
Mathematical functions aren't too great a metaphor for computer functions. They share a lot of similarities—domain, range, inputs, outputs, composability. But mathematical functions just are; there's no notion of changing things—however, the most common purpose of computer software is to change things. So the metaphor breaks down, or you come up with some way of defining a change without anything changing, which adds a layer of abstraction to everything you try to do.
I think it is correct to say that, in the functional paradigm, you effect change by making something new (and forgetting some older things). Regardless of the paradigm (other than self-modifying code, perhaps), programs just are, but they must be evaluated if they are to do something useful.
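A small sketch of "change by making something new", with hypothetical names:

    // Instead of mutating `user`, build a new object that shares the
    // unchanged fields; the old value is simply left behind.
    const user     = { name: 'Ada', visits: 1 };
    const nextUser = { ...user, visits: user.visits + 1 };

    console.log(user.visits);     // 1 -- the old value still just is
    console.log(nextUser.visits); // 2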
I will leave that last sentence as I wrote it, but it immediately leads me to wonder whether functions that write functions have wandered into the realm of self-modifying code.
I really agree with you. I think there's a tension between how a computer works and how functions operate on data.
Immutability and Referential Transparency help reduce that friction, but in the end we still have not found a real solution. That's why languages like Haskell require a bunch of gymnastics just to allow IO to be expressed through pure functions.
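A crude sketch of the idea (in JavaScript, with thunks standing in for Haskell's IO values; all names here are made up):

    // Functions return *descriptions* of effects instead of performing
    // them, so the functions themselves stay pure.
    const putLine = s => () => console.log(s);             // builds a thunk
    const andThen = (io1, io2) => () => { io1(); io2(); }; // composes thunks

    const program = andThen(putLine('hello'), putLine('world')); // still pure

    program(); // the one impure step: actually run the description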
Functional programming is also a family resemblance thing: you can't necessarily give a strict definition, any more than you can define "blues music".
I'd say functional programming is a tradition that draws from sources like denotational semantics, lambda calculus, Church, Landin, etc etc.
There are some different subspecies of functional programming: the Lisp family that draws from AI engineering, MIT, Emacs, actor research via Scheme, etc; the ML family that draws from typed lambda calculus, inductive definitions, and logical proof systems; and more, like concatenative languages.
JavaScript was always inspired by functional programming, as evidenced by Eich's claim to have tried to sneak in a variant of Scheme dressed up as Java. I always use lots of anonymous functions, higher-order functions, and nonmutating transformations in my JS, without using any special immutability libraries, and I'm pretty happy with it.
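The kind of thing I mean, with toy data:

    // Higher-order, nonmutating pipeline: `users` is never modified.
    const users = [
      { name: 'Ada',     active: true  },
      { name: 'Grace',   active: false },
      { name: 'Barbara', active: true  }
    ];

    const activeNames = users
      .filter(u => u.active)
      .map(u => u.name.toUpperCase());

    console.log(activeNames);  // [ 'ADA', 'BARBARA' ]
    console.log(users.length); // 3 -- untouched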
FP is -- in my humble opinion -- a broader topic that describes a certain programming style. HOF is one thing in there, but so are "functions + data rather than classes" and "favor immutability".
In contrast, the imperative programming style is not merely about "not having HOF" or "mutable variables all over the place".
Yeah, and---literally, not to be snarky---times change.
FP isn't a technical definition. It's a cultural, family-resemblance definition that's driven by some mixture of fashion and genuine exploration into a niche of ideas in computing and formal languages.
The best you could hope for would be a set of shared values of the FP culture, and then justifications for why various things that call themselves FP feel they are upholding those values. Even with a definition like this, though, the values will shift and waver over time.
Here are my FP values:
* Simple over easy.
Rich Hickey's old chestnut. Favor abstractions with few moving parts,
reduced interactions, and more parsimonious overall models over
ones driven by metaphor or target use case alone. Result is
tools which are perhaps harder to get started using but have
better complexity scaling properties.
* Mathematics has been doing this a long time.
PLs are just formal languages and mathematicians have been working
seriously on formal languages for at least 150 years: steal their
ideas. From this we get ideas around comparative linguistics,
semantics, various proof/reasoning mechanisms and types.
* Readability counts, but not just in the sense of syntax.
Really this means "legibility" or even "static legibility". It's
another hint toward types, immutability, and general state space
reduction. It's a strong push away from "emergent behavior" to the
greatest degree possible. Things should endeavor to do what they
say on the tin... and, to the greatest degree reasonable, say things
on the tin in such a way that the information is available from the get-go.
* Tradeoffs between power-to-construct and power-to-analyze.
This is everywhere in formal languages/PL, but it's especially strong
in FP since there's a focus on (simple over easy) semantics. It opens
the door for there to be mathematical semantics as opposed to just
operational semantics, and this makes for rich opportunities for on-the-tin
static reasoning (equational reasoning, changing interpreters, embedded DSLs);
a tiny example follows this list.
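To make "equational reasoning" concrete (in JavaScript for familiarity; names are made up):

    // Map fusion: for pure f and g these expressions are interchangeable,
    // and the second walks the array only once.
    const xs = [1, 2, 3];
    const f  = n => n + 1;
    const g  = n => n * 2;

    const twoPasses = xs.map(f).map(g);     // [4, 6, 8]
    const onePass   = xs.map(x => g(f(x))); // [4, 6, 8]
    // The rewrite is only safe because f and g are pure; with side
    // effects the two versions would be observably different.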
I think these values more or less were in place back in the old Lisp-y FP days, but they really are taken to new places by "more modern" takes on FP.
I think the reason is the rise in CPU power - immutability has a processing cost, but it makes things easier for programmers. This is a general trend in programming languages over the decades - higher abstraction (easier for programmers) but harder for compilers to optimize.
For example, Common Lisp has two versions of many list functions - one that returns a copy and another that mutates and returns the original list (e.g. reverse and nreverse). (By the way, Paul Graham in On Lisp, which is from 1993, mentions that functional programming style means that a function doesn't modify its arguments but rather returns a copy.) Once upon a time, the difference actually mattered. Today, mostly, it doesn't.
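For what it's worth, JavaScript recently grew the same split, which makes the distinction concrete:

    // Copy vs. mutate, in JavaScript terms:
    const xs = [3, 1, 2];

    const ys = xs.toSorted(); // functional style: returns a sorted copy (ES2023)
    console.log(xs);          // [ 3, 1, 2 ] -- original untouched
    console.log(ys);          // [ 1, 2, 3 ]

    xs.sort();                // destructive style: sorts in place
    console.log(xs);          // [ 1, 2, 3 ]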
Common Lisp has all this stuff because it comes from a tradition of writing efficient code in Lisp itself, where the code can be anything from a file system, network stack, theorem prover, or graphics toolkit to the language implementation itself... It is already its own implementation language, where a programmer might want to control some aspects of memory allocation.
Other languages delegate this stuff to the implementation language/runtime, for example Java/JVM.
The book "Functional Programming" by Field and Harrison (1988) was already very much into referential transparency as the panacea for all our programming woes:-)
Agreed--being able to define functions for control flow & data transformation seemed functional enough (compared to having to rely on primitive reserved words).
Of course, even then, C was functional by that definition (as long as you could remember where the parens & asterisk went when declaring your function pointers!)...so what do I know.
On the other hand, if you have the attention span and interest, Li Haoyi has an interesting (and, in its conclusion, useful) take on this topic:
Which is neither here nor there regarding my original observation, which didn't deny that "words shift in meaning" but called attention (or memory) to a particular shift in meaning.
That said, and being a pedant, I must say that while English (or any language) does indeed change, it doesn't do so by any kind of consensus. People don't stop and give consent to a change in meanings; they roll along with it. There are some mechanisms in this shifting of meanings (the need to describe new concepts, fashion, immigration, etc.) but consensus is not one of them.