Hacker News

There is an interesting trend where things with a strong mathematical definition tend to have the advantage. "Functional Programming" doesn't have a precise definition at all as far as I know, so it is likely it doesn't refer to a real thing. Most of the paradigms are similar as they don't seem to actually mean anything. People seem to want to describe something (in today's article, data flow) but they don't quite have the language to do it.

If you go to https://en.wikipedia.org/wiki/Functional_programming right now you will see "functional programming is a programming paradigm where programs are constructed by applying and composing functions" which is a bit of a ... it is quite hard to program without doing that. Even SQL is basically applying and composing functions. There are languages that are branded together because they have similar communities or techniques, but the brand isn't describing something beyond people thinking that things belong in a basket together.




> "Functional Programming" doesn't have a precise definition at all

It may be that different people mean different things when they talk about FP. I come from the Haskell corner and from my point of view that's a completely insane claim, at least when it comes to pure FP.

[ ] Are you writing a procedure that just processes its arguments into a return value?

[ ] Does the procedure also operate on a non-static context?

[ ] Does the procedure generate additional side effects?

If you answer yes|no|no, you are programming purely functionally.

The formal litmus test is the question of whether your procedure actually fulfills the definition of a mathematical function with regard to the relationship between the arguments and the return value. If it also does not generate any side effects, it is a purely functional procedure!
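To make the three questions concrete, here is a sketch in Python (all names invented), with one function per answer pattern:

```python
# yes | no | no: depends only on its arguments, no side effects -> pure.
def scale(x: int, factor: int) -> int:
    return x * factor

rate = 3

# Operates on a non-static context: the module-level `rate` may change,
# so the same argument can yield different return values over time.
def scale_by_rate(x: int) -> int:
    return x * rate

# Generates an additional side effect: the print is observable behaviour
# beyond the relationship between arguments and return value.
def scale_logged(x: int, factor: int) -> int:
    print(f"scaling {x}")
    return x * factor
```

Only the first one passes the litmus test; the other two fail it for the two different reasons named above.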

Then you can apply the principles of mathematical composition of functions to the construction of programs. Purely functional languages force you to program this way by default and to use special constructs for everything that has to do with side effects: IO monad, algebraic effects, The Elm Architecture.

How are you supposed to program like this? This is where (in addition to the concepts mentioned above) the higher-order functions that everyone knows come into play: map, fold, filter, flatMap, ... If you only know these and their use in impure languages, as if this were nothing formally thought through, but just a programming style for hipsters, then I can understand how the impression arises that there are no precise definitions here.

In practice, the concepts are often very distorted or not understood at all. For many programmers, OOP mainly seems to mean that you get the drop-down list with completions in Rider when you press “.”.


I also come from the Haskell corner and I agree with roenxi. I don't think that FP is a well-defined concept, nor do I understand your characterization of pure FP. Let's take a simple program

    main = do
      v <- newIORef 0

      let procedure = do
            n <- readIORef v
            print n
            modifyIORef v (+ 1)

      ...

and have a look at the conditions you laid out

* Are you writing a procedure that just processes its arguments into a return value?

  No, it has no arguments and its return value is just (), but it does more besides.
* Does the procedure also operate on a non-static context?

  Yes, it operates on v.
* Does the procedure generate additional side effects?

  Yes, it prints to the terminal.
So I have answered no | yes | yes, the exact opposite of what I should have to be classed as doing pure functional programming.

You might say "Ah, but the return value of `procedure` is not (), it's IO ()", but I don't see that that changes anything. I can write my entire program in IO if I want, in a way that's hard to distinguish from having written it in, say, Python. Is that not pure functional programming, despite the fact that it's being carried out in Haskell? Then you might say "Ah, but the difference is that you could have written your program purely, without IO". But again I fail to see how that differs from Python. I can write programs that don't do any IO in Python too.

So what is it that makes Haskell a pure functional language and Python not?

My response to all this is that the notion of "pure" is very unhelpful and the correct way to describe this property that Haskell has is "referential transparency", that is

    let x = rhs in ... x ... x ...
has the same result as

    ... rhs ... rhs ...
regardless of how many times (or zero) x occurs.
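As a quick sketch of how that substitution test fails for an impure expression (Python stands in here for any effectful language; all names are made up):

```python
import itertools

counter = itertools.count()

def rhs() -> int:
    # Not referentially transparent: each call produces a new value.
    return next(counter)

# "let x = rhs in ... x ... x ...":
x = rhs()
bound = x + x            # 0 + 0

# "... rhs ... rhs ...":
inlined = rhs() + rhs()  # 1 + 2, a different result
```

For a referentially transparent `rhs`, `bound` and `inlined` would be guaranteed equal.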


main is never pure. Your `procedure` is also not pure (and also not referentially transparent). You can write impure code in Haskell, which your example demonstrates very well. But then it is necessarily IO code. Non-IO code, on the other hand, is always pure (and referentially transparent).

> but I don't see that that changes anything

It changes everything, but I can't explain it any better than I did above.

> So what is it that makes Haskell a pure functional language and Python not?

If you give me a Haskell procedure `f1 :: Int -> Int` without showing me the content, I still know that it is referentially transparent and pure. If f1 evaluates x to y once, I know that it ALWAYS evaluates x to y. If a main program in which f1 is used generates a screen output, for example, I can safely exclude f1 as the source of this side effect. If you give me a function `f2 :: Int -> IO Int` instead, I can't draw all these conclusions. In Python, these conclusions would be invalid from the start.

Now you can say, “So what? Who cares?” But it helps me enormously when designing and structuring programs, understanding programs, localizing program behavior, etc. If you don't see any added value in this distinction (ensured by the compiler), we don't have to argue about it. But I simply cannot agree with you that this does not represent a conceptual and formally justified difference between Haskell and Python.

Of course you can program purely functionally in Python. Just as you can program in a dynamically typed language as if you were dealing with static data types. Or in Java without throwing exceptions and null references around everywhere. Or in C without segmentation faults.

These are not normative or moral comparisons, but other examples of the fact that you can of course program in such a way that the code has certain properties, even if the compiler does not secure these properties. The question is whether you want that.
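A small sketch of what pure-by-convention code looks like in Python (hypothetical function; nothing is enforced by the language):

```python
# Pure by convention: returns a fresh value instead of mutating its input.
# Nothing in Python stops a caller (or a later refactor) from breaking
# this discipline; only the compiler-checked variant gives a guarantee.
def append_item(xs: tuple, x: int) -> tuple:
    return xs + (x,)

base = (1, 2)
extended = append_item(base, 3)  # base itself is left untouched
```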


> main is never pure. Your `procedure` is also not pure

Ah, it's possible I misunderstood you. I missed that you said "Purely functional languages force you to program this way by default". I mistakenly assumed you were saying that in a purely functional language you can only program purely.

So you are saying that Haskell is a pure functional language, but you can also write non-pure things in it? So being a "pure language" is more a case of what sort of style is encouraged rather than what sort of style is possible?

> (and also not referentially transparent)

Interesting. Could you explain what you mean by that? (The definition of referential transparency that I know doesn't apply to functions but to languages.)

> You can write impure code in Haskell, which your example demonstrates very well. But then it is necessarily IO code. Non-IO code, on the other hand, is always pure (and referentially transparent).

That's interesting. How do you distinguish these two pieces of code:

    f = do
      v <- newIORef 0
      modifyIORef v (+1)
      readIORef v

    g = do
      v <- newSTRef 0
      modifySTRef v (+1)
      readSTRef v
is the former impure and the latter pure? Or are both impure?

> So what is it that makes Haskell a pure functional language and Python not?

If you give me a Haskell procedure `f1 :: Int -> Int` without showing me the content, I still know that it is referentially transparent and pure ... In Python, these conclusions would be invalid from the start.

Do you mean because of the type? If not, I don't understand what distinction you're drawing, and what makes the conclusions invalid for Python. If so, then does a pure language have to be typed? Would it be impossible for a pure language to be untyped?

> Now you can say, “So what? Who cares?” But it helps me enormously when designing and structuring programs, understanding programs, localizing program behavior, etc. If you don't see any added value in this distinction (ensured by the compiler), we don't have to argue about it.

We don't have to argue about it. I program in Haskell every day and reap the benefits :)

> But I simply cannot agree with you that this does not represent a conceptual and formally justified difference between Haskell and Python.

I agree there's a conceptual and formally justified difference between Haskell and Python. I just don't agree with you about how to characterize it :)

To reiterate my characterization, it is that Haskell has "referential transparency", that is

    let x = rhs in ... x ... x ...
has the same result as

    ... rhs ... rhs ...
regardless of how many times (or zero) x occurs. That's it! Nothing to do with "purity", "IO", "processing arguments", "non-static contexts" or "side effects". Now I may be wrong. That's why I'm trying to understand what you think the characterization is.

But so far I have never discovered a benefit of programming "purely" in Haskell that is not ultimately a consequence of what I call "referential transparency" above. Do you know one?


This will be a bit more detailed. I apologize in advance.

> So being a "pure language" is more a case of what sort of style is encouraged rather than what sort of style is possible?

The crucial point is that the (potentially) impure code is clearly and explicitly differentiated from pure code (IO code vs. non-IO code). Haskell uses the type system to draw this distinction.

>> (and also not referentially transparent)

> Could you explain what you mean by that? (The definition of referential transparency that I know doesn't apply to functions but to languages.)

Functions are expressions in Haskell (and in Python). Applications of functions to their arguments are also expressions. I use "procedure" as a synonym for "function".

As I understand it, referential transparency is a property of expressions. But if we understand a language as the total set of expressions that result from its alphabet and its production rules, then we can say that a language is referentially transparent if and only if all its expressions are referentially transparent. Consequently, the views are compatible.

An expression is referentially transparent if it can be replaced by its value (what the expression evaluates to) without changing the behavior of the program.

This implies that if side effects are emitted during the evaluation of an expression, then the expression cannot be referentially transparent, because these side effects would disappear if the expression were simply replaced by its value in the program. If a dynamic (i.e. changeable) context is also processed during the evaluation of an expression, then the expression cannot be guaranteed to be referentially transparent over time either.

A function (i.e. procedure) is pure if it (1.) always returns the same return value for the same arguments and (2.) does not produce any side effects. Note that the definition is also compatible with functions that do not process arguments. The requirement is the same: the same return value must always be returned.

This implies that a function is pure if the relation between its arguments (even if they are 0 arguments) and its return value is like that of a mathematical function. I already wrote that above. It simply boils down to the fact that you can execute the function as many times as you want: it will always return the same value (for the same arguments). This is only guaranteed if the function does not operate on a dynamic context in addition to its arguments (including system time, values from a random generator, etc.).

Pure functions are just stupid simple things that deterministically transform their arguments into some value. I want these things to make up as much of my code as possible, because they combine like Lego and are pretty foolproof to handle.

The value proposition of purely functional programming languages is that they semantically distinguish these pure functions from impure functions. In Haskell, I only have to look at the type of a function. If it is something like `t1 -> t2 -> ... -> IO tn`, then the function is potentially impure because it evaluates to a value that is wrapped in the IO monad, so to speak. For any other function in Haskell, I know for sure that it is pure.

So I can write as much pure code as possible and then add a bit of IO code to interface the pure code with the execution context, or with other subsystems of the program where it is unavoidable to work with shared state. I can also do all that in Python. But in Haskell, the language semantics ensure that my code intended to be pure is really pure, and recognizably different from impure code.
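As a rough sketch of that "pure core, thin impure shell" layout in Python (file name and function names are made up, and unlike Haskell nothing here is checked by a compiler):

```python
# Pure core: a deterministic transformation from arguments to a value.
# Safe to test, cache, and reuse anywhere.
def summarize(lines: list) -> str:
    words = sum(len(line.split()) for line in lines)
    return f"{len(lines)} lines, {words} words"

# Thin impure shell: all I/O is confined to the program's edge.
def main() -> None:
    with open("input.txt") as f:  # hypothetical input file
        print(summarize(f.readlines()))
```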

To answer your question: purity implies referential transparency. I can see that from the definitions. The converse implication does not hold, though. I don't understand exactly why, but so far it hasn't been a priority for me to understand this.

> is the former impure and the latter pure?

Correct. You can recognize this by the data types:

    f :: IO Integer
    g :: GHC.ST.ST s Integer

You use do notation in both cases, but that has nothing to do with it. You can write not only IO code with the do notation, but all kinds of other code.

    foo :: String
    foo = do
      s <- "hello"
      pure s
The do notation is only syntactic sugar for the bind operators. Otherwise your functions would look like this:

    f :: IO Integer
    f =
      newIORef 0 >>= \ v ->
      modifyIORef v (+1) >>
      readIORef v

    g :: GHC.ST.ST s Integer
    g =
      newSTRef 0 >>= \ v ->
      modifySTRef v (+1) >>
      readSTRef v
> ...

> Do you mean because of the type?

Exactly. I can see from the data type that f1 is pure and f2 is impure.

> what makes the conclusions invalid for Python.

Consider this code:

    def add(a: int, b: int) -> int:
        print("hello")
        return a + b
The function is impure because of the print. Without the print, it would be pure. But the type annotation would be exactly the same. The example is a bit silly, because type annotations are not taken into account in Python anyway (unless you use typeguard).

In Haskell you cannot simply print something in a procedure of type `Int -> Int -> Int`. The type checker would reject it. If you want to do that, you have to type it as `Int -> Int -> IO Int`. You can then no longer simply use it as if it were typed as `Int -> Int -> Int`. It is then simply a completely different procedure: an impure one.

> does a pure language have to be [statically] typed?

Very good question! I don't know. I don't think it's formally a requirement, but I only know languages that handle it via the type system (IO, algebraic effect handlers, TEA).

> But so far I have never discovered a benefit of programming "purely" in Haskell that is not ultimately a consequence of what I call "referential transparency" above. Do you know one?

I think you're right about that. Purity and referential transparency somehow seem to be almost the same thing but not 100% congruent. I have a gap in my understanding at this point.

https://stackoverflow.com/questions/4865616/purity-vs-refere...

This seems to suggest that I won't be able to close this gap any time soon. In the end, it probably doesn't matter. I assume we both mean the same thing.


> This will be a bit more detailed. I apologize in advance.

No need to apologize. Thank you for your detailed response!


It's arguable that SQL is "basically applying and composing functions," given that its mathematical underpinnings lie in the relational algebra (hence relational databases). Further, while it's maybe technically correct to make statements such as, all programming languages are based on lambda calculus, a universal model of computation [1], this is about as precise a statement as saying that all mathematics is based on set theory. While it may be true, it may be too low a level of abstraction, and there are probably higher-order constructs or machinery that get closer to your actual domain of study or application (e.g. real numbers, functions, vectors).

I have mentioned downthread [0] a few characteristics that distinguish FP from other paradigms - namely, first-class functions and higher-order functions (the ability to pass functions to functions); referential transparency, "pure functions" without side effects, and equational reasoning (the ability to replace expressions with their value and keep the behaviour of the program identical); and a tendency to prefer parametric and ad-hoc polymorphism (generics and trait/typeclass bounds) to subtype polymorphism, the "inheritance" boogeyman of OOP.
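The first of those characteristics, first-class and higher-order functions, can be sketched in a few lines of Python (names are illustrative):

```python
from functools import reduce

# First-class functions: functions are values that can be passed around
# and returned like any other value.
def compose(f, g):
    return lambda x: f(g(x))

inc = lambda x: x + 1
dbl = lambda x: x * 2

inc_then_dbl = compose(dbl, inc)  # computes dbl(inc(x))

# Higher-order staples: map and reduce each take a function as input.
total = reduce(lambda acc, x: acc + x, map(inc_then_dbl, [1, 2, 3]), 0)
```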

Again as mentioned in [0] the "expression problem" [2] is a good model for discussing the ergonomics and tradeoffs of FP vs. OOP.

[0] https://news.ycombinator.com/item?id=41450851

[1] https://news.ycombinator.com/item?id=41452258

[2] https://wiki.c2.com/?ExpressionProblem


The distinguishing characteristics don't distinguish FP from anything. Those are features like first class functions that I would expect to find support for in Python or C++, for example. And it isn't possible to write a useful program that doesn't have side effects because there would be no IO.

It is a case of the paradigm not really meaning anything. It is an arbitrary bunching of language features and code properties that doesn't provide a particularly useful guide. Anything that is a good idea in one paradigm is also a good idea in any other paradigm too.


It's worth noting that both languages cited (Python, C++) are multi-paradigm and therefore contain aspects of both OOP and FP; there are very few languages that are exclusively object-oriented or functional. The naming of the functools [0] stdlib package for Python's higher-order functions is illustrative in this respect, and there is an entire HOWTO for "implementing programs in a functional style" [1] in the Python docs.

If you want to see object-orientation taken to the extreme, and how it makes writing programs in a functional style extremely unergonomic, you can read old-school Guava code from Java 6 and try to decipher all the line noise; there is even a caveat in its official documentation [2] that states,

    functional programming without Java 8+ requires awkward and verbose use of anonymous classes.

    Excessive use of Guava's functional programming idioms can lead to verbose, confusing, unreadable,
    and inefficient code. These are by far the most easily (and most commonly) abused parts of Guava,
    and when you go to preposterous lengths to make your code "a one-liner," the Guava team weeps.
> It is a case of the paradigm not really meaning anything. It is an arbitrary bunching

This is a case of throwing the baby out with the bathwater; just because we can't come up with a perfectly precise definition doesn't mean that we should dispense with rough heuristics and (perhaps crudely drawn) boundaries that serve as a useful clustering of related features. Try asking a machine learning or computer vision algorithm for the criteria it uses to categorize something as a "dog;" it's unlikely that you'll come up with something as precise as the definition of the Platonic ideal of a dog, but we can still use its model to recognize dogs with a reasonable degree of accuracy.

[0] https://docs.python.org/3/library/functools.html

[1] https://docs.python.org/3/howto/functional.html#small-functi...

[2] https://github.com/google/guava/wiki/FunctionalExplained


But if we take a holistic view here, languages that treat paradigms as a thing have been trounced in the marketplace of ideas. And to find an example of a non-functional programming language your first instinct was to talk about Java 7 - where there is a strong consensus that if you were going to create a new program today it should not be used. We're up to Java 22.

The attempts from decades past to draw a distinction between OOP and FP paradigms have largely failed. Consensus seems to be that whatever is concise and easy to understand is best, and attempting to force code into a paradigm is inferior to looking at specific individual features and techniques in context of the problem being tackled.


> "Functional Programming" doesn't have a precise definition at all as far as I know, so it is likely it doesn't refer to a real thing. Most of the paradigms are similar as they don't seem to actually mean anything.

> The distinguishing characteristics don't distinguish FP from anything.

> The attempts from decades past to draw a distinction between OOP and FP paradigms have largely failed. Consensus seems to be that whatever is concise and easy to understand is best and attempting to force code into a paradigm is inferior

This last statement is a marked departure from your original, first assertion which is a descriptive claim that functional programming, and indeed all paradigms, don't actually exist or mean anything; in it you instead make the normative claim that "attempting to force code into a paradigm is inferior," on the practical bases of concision or comprehensibility.

Moreover, if paradigms don't actually exist as per your first assertion, I find it difficult to understand what exactly it is we are "forcing code into" in your second statement. Clearly there exists some cluster of related features you are referring to when you point out the pitfalls of trying to pigeonhole your problem into a certain style of solution. As mentioned, Python even has a name for this cluster, and it's called the `functools` module, which lives alongside other implementation styles (OOP, procedural, etc.) in its multi-paradigm space of solutions.

> And to find an example of a non-functional programming language your first instinct was to talk about Java 7 - where there is a strong consensus that if you were going to create a new program today it should not be used. We're up to Java 22.

The claim was basically that OOP and other paradigms don't exist; to refute the claim I supplied a counter-example of a language that leans so heavily towards the OOP side of the spectrum that writing FP code within it is prohibitive. Just because that language has been updated to incorporate techniques from other paradigms doesn't mean that the OOP paradigm ceased to exist once Java 8 came out. Just because following a "purist" FP or OOP approach is impractical does not mean that such categorizations don't exist, even if only as a limit or extreme to which varying languages tend to with varying degrees of resemblance. Java 22 can be considered mixed-paradigm, but it's probably still more OOP than Racket Lisp.

There is a difference between claiming that a distinction does not exist, and that drawing distinctions is an inferior or impractical approach.

In general I agree with the sentiment that one should be comfortable working with multiple paradigms, because as per the "expression problem" a strictly FP or OOP approach comes with tradeoffs, and which tradeoffs should be made depends on the particular problem being tackled.


Here's a good enough definition: "The same input yields the same output".

> "functional programming is a programming paradigm where programs are constructed by applying and composing functions"

This only seems like a useless truism if you take 'function' to mean 'method' or 'procedure'. If you nail down 'function' to same input -> same output, then it starts to make more sense.
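One practical consequence of that "same input yields the same output" definition is that results can be cached without changing what the program computes. A Python sketch:

```python
from functools import lru_cache

# Because the same input always yields the same output, memoizing the
# results cannot change the program's behaviour -- only its speed.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

This transformation would be unsafe for a procedure whose output depends on anything besides its arguments.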


No. You missed the meaning. The paradigm is referring to the fact that the ENTIRE program must be constructed this way.

So let's say you have this in your code:

    x = 1
    b = x + 2
    x = b + x
You have composed three procedures here in order. This is illegal. You must ONLY construct the program via composing functions; there can be no other way.


Not so fast. This next is legal Haskell:

  let x = 1
      b = x + 2
      x = b + x
  in <do something with b and x>
The third line creates a second binding of x which shadows the first binding, and shadowing is considered by most people to be within the functional paradigm. What is not considered functional by most people is assigning a variable in a loop, although even here you can do it in Haskell. Specifically, you can call readIORef and writeIORef every loop iteration, but last time I tried, that was about 1000 times less efficient than in an imperative language.


The intent of my syntax is to show mutation and procedures not variable shadowing and let in.


I don't think I missed anything. You presented non-functional code as an example of code which isn't functional. There's no contradiction.

And good riddance, too! I'm so sick of seeing this anti-pattern at $DAYJOB: First, create a thing which isn't what you called it (e.g. 'result'), and then mutate it in place until it is what you called it.
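In Python terms, that anti-pattern and its functional replacement look roughly like this (illustrative names):

```python
# The anti-pattern: 'result' is not actually the result until the
# mutation loop has finished running.
def doubled_mutating(xs):
    result = []
    for x in xs:
        result.append(x * 2)
    return result

# The functional alternative: construct the value you named, directly.
def doubled(xs):
    return [x * 2 for x in xs]
```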


It's funny you say that, because I'd say FP has a much more well-defined mathematical foundation compared to OOP. In fact, before it became as dominant as it is today, it was often criticized as being too scholarly and theoretical. It's built mostly on the foundations of Lambda Calculus (and maybe some Type Theory).

I don't think OOP ever had nearly as strong of a mathematical/scholarly foundation as FP has


> It's built mostly on the foundations of Lambda Calculus (and maybe some Type Theory).

Can you describe a programming language that isn't based on lambda calculus? It is a universal model of computation.


In that sense, every programming language (as well as lambda calculus) is based on Java, since Java is a universal model of computation. So I don't think it's very useful to take "is based on" to be as broad as "is interpretable using".


Conspicuously absent in that thought is an alternative way to interpret "is based on". Because I suspect that in most senses that FP is based on lambda calculus we probably can claim it is also based on a subset of Java. Everything useful is multi-paradigm these days, programming paradigms turned out to not be a useful lens for developing useful programming languages.


> Conspicuously absent in that thought is an alternative way to interpret "is based on".

"Has designers who consciously adopted significant ideas from" seems good enough for me.


That is a path-dependent property though. So if we interpret it that way we can't tell if a language is based on lambda calculus without interviewing the author.

Indeed, the definition admits that we could have two almost identical languages but only one of them is based on lambda calculus. I don't think it is a reasonable way to interpret "is based on", it requires too much knowledge of history. It is more proper to have a technical definition that is rooted in the language itself.


C'est la vie: it's always going to be a very fuzzy concept when applied, short of broadening it until it's unrecognizable. Not every property of a thing needs to be perfectly decidable with no room for guesswork.

To put it pithily, "This popular notion is provided 'as is', without warranty of any kind, express or implied, including but not limited to fitness for a particular purpose." If you really want a property that's fully intrinsic, I'd suggest disregarding fuzzy 'based on'-ness and instead considering some particular aspect you care about, e.g., "How difficult is it to express such-and-such a technique in this language?"


FP is math. There's not much distinction. The language of math is FP.



