
> "Functional Programming reduces the surface area for Bugs" is a strong claim and can be proven

Is there really a formal proof or Software Engineering paper that proves this?

I was told this in my FP class in university, but it was pretty much sold to me as gospel.

In practice I agree with the statement - I certainly feel there's an inherent "cleanliness" to FP.

But I also feel that the argument is not only about program correctness; many people ultimately conflate it with developer productivity. And here's where I feel that things fall apart a bit: I feel as if sometimes it's much quicker to do things with state, so maybe the time you save debugging is time you add elsewhere?



Intuitively you can derive that this is true informally, beyond just an overall feeling.

It’s simple, really. Functional programming is just imperative programming without one feature: mutability. If functional programming is regular programming with a reduced feature set, then it has the same error surface area as regular programming minus the surface area of errors caused by mutability. Hence the error surface area is smaller.

Now think of all the errors caused by initializing a variable as null and changing it later, rather than immediately initializing an immutable variable with the correct value, and you can intuit just how big that error surface area actually is.
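To make that concrete, here's a hypothetical sketch of the null-then-mutate pattern next to a bind-once style (names and defaults are illustrative, not from the thread):

```python
def parse_port_mutable(config: dict):
    # Mutable style: start with None, fill it in later.
    port = None
    if "port" in config:
        port = int(config["port"])
    # Forgot the else-branch: port may still be None here, and a later
    # port + 1 would blow up far away from this function.
    return port

def parse_port_immutable(config: dict) -> int:
    # Bind-once style: the final value (including the default) is decided
    # at the binding site, so no None can escape.
    return int(config.get("port", 8080))

print(parse_port_mutable({}))    # None slips out silently
print(parse_port_immutable({}))  # 8080
```

The bug in the first version only exists because the binding and the final value are separated in time; the second version has no such gap.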


> Is there really a formal proof or Software Engineering paper that proves this?

Yes (page 7):

https://arxiv.org/pdf/1901.10220.pdf


> Is there really a formal proof or Software Engineering paper that proves this?

I wonder if there is one such formal proof as well.

Intuitively it is trivial: functions are a subset of all procedures, state can introduce unique bugs, and these bugs are not found in functions, so you're dealing with a subset of all possible bugs.

Another intuition is this: by introducing state you increase complexity. A procedure in isolation is not necessarily referentially transparent, but a function is. You cannot replace the procedure with its evaluation at any given point in time, because it is 'connected' to the surrounding program via that state. Now you'd have to show that increased complexity introduces unique bugs.
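A minimal sketch of that distinction (illustrative names): the pure function call can be replaced by its result anywhere, while the stateful procedure cannot, because each call depends on the surrounding program's history.

```python
counter = 0

def next_id() -> int:
    # Procedure: 'connected' to the surrounding program via mutable state.
    global counter
    counter += 1
    return counter

def add(x: int, y: int) -> int:
    # Function: the call add(2, 3) can be rewritten as 5 at any point.
    return x + y

assert add(2, 3) == add(2, 3)  # referentially transparent: always holds
assert next_id() != next_id()  # same call, different value each time
```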

I'm simply not equipped (yet) to make such claims, but I'd love to hear from experts on these matters. I know that you can formally verify stateful programs, so it is likely not an issue of what is possible. But I damn sure know it is much easier to reason informally about functions than about procedures, except if the procedure merely has local state.

> And here's where I feel that things fall apart a bit: I feel as if sometimes it's much quicker to do things with state, so maybe the time you save debugging is time you add elsewhere?

I can only speak for myself here, but yes, certain algorithms are more intuitive if implemented imperatively. But I found that the set of these algorithms shrinks over time as you get used to FP. Vice versa, there are also algorithms that are much more easily written with functions. Then there is the core idiom of the language you're using. If it is imperative OO, then writing functional programs can sometimes feel cumbersome and less readable.

There are many factors that may or may not apply as well. For example functions are easier to compose and decompose, since they are by definition simpler. However imperative procedures are sometimes easier to read "from top to bottom", because they enable a more real-world-y mechanical/visual mental model.
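As an illustration of that composability point, here is a small sketch (the `compose` helper is hypothetical, not a stdlib function): each pure piece can be tested alone, then glued together mechanically.

```python
from functools import reduce

def compose(*fns):
    # compose(f, g, h)(x) == f(g(h(x))) -- pure functions chain freely
    # because each one depends only on its input.
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fns), x)

strip = str.strip
lower = str.lower
words = str.split

normalize = compose(words, lower, strip)
print(normalize("  Functional Programming  "))  # ['functional', 'programming']
```

Decomposition works the same way in reverse: any stage can be swapped out or reused without worrying about hidden state threading through the pipeline.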


> certain algorithms are more intuitive if implemented imperatively.

FWIW, "Dancing Links" is a wonderful technique that would be worse than pointless in an immutable language.

https://arxiv.org/pdf/cs/0011047.pdf
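The core move of the technique in Knuth's linked paper can be sketched in a few lines (a simplified illustration, not the full exact-cover solver): a node is unlinked from a doubly linked list while its own pointers are left intact, so the removal can be undone in O(1). The trick relies essentially on mutation.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = self.right = self  # circular, self-linked

def insert_after(a: Node, b: Node) -> None:
    b.left, b.right = a, a.right
    a.right.left = b
    a.right = b

def unlink(x: Node) -> None:
    # Neighbors forget x, but x still remembers its neighbors.
    x.left.right = x.right
    x.right.left = x.left

def relink(x: Node) -> None:
    # Undo: x's stale pointers are exactly what's needed to restore it.
    x.left.right = x
    x.right.left = x

head = Node("head")
b = Node("b"); insert_after(head, b)
a = Node("a"); insert_after(head, a)      # list: head -> a -> b
unlink(a)
assert head.right is b                    # a is gone from the list
relink(a)
assert head.right is a and a.right is b   # restored in O(1)
```

An immutable version would have to rebuild the list (or a persistent structure) on every backtracking step, which is exactly the cost the dancing-links trick avoids.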



