Can functional programming be liberated from the von Neumann paradigm? (2010) (conal.net)
92 points by bkirwi on Jan 1, 2015 | hide | past | favorite | 36 comments



It's worth noting that functional programming is hard not because of the functional paradigm itself, but because of the constraints you have while solving a problem (memory, for one).

This requires changing the simple, composable functions that you could reuse forever, to complex ones that have to split the work into multiple stages.

Another source of inherent complexity, which has nothing to do with the solution of the problem itself, is handling the input and runtime issues. Input cannot be guaranteed to always be coherent, and runtime issues (such as a user interrupt) may arise while running. At this point, interrupting the computation is easy, but as a user I may also want to:

* know why the function stopped, with a meaningful answer (not just a stack trace)

* know the location of the error in the input data

* be able to resume the computation

If you've ever programmed functionally, you know how hard it is to do something as simple as producing meaningful error messages from deep within a series of fold/map calls.

Keeping state is a simple (and, I would say, equally elegant) solution to this recurring problem. Please consider that I'm saying this from a functional programming perspective (as in: objects as localized state containers, not necessarily breaking the purity assumption).

Another issue is that we, as humans, operate in a stateful world. Stateful user interfaces are sometimes more efficient because of the way _we_ work. This can be seen in something as simple as "now select a file", which brings up a dedicated section of stateful code to navigate a tree. As such, you are constantly faced with the problem of keeping state.


You can use the Either monad to figure out where the computation failed and give meaningful errors. Or the Writer monad, to add logging to the computation.
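As a minimal sketch of the Either approach (the names here, like ParseError, are hypothetical), a failure can carry its location in the input rather than just a stack trace:

```haskell
-- Hypothetical example: parse a list of numbers, reporting the index
-- and the offending value on failure instead of a bare error.
data ParseError = ParseError { errIndex :: Int, errInput :: String }
  deriving (Show, Eq)

parseAll :: [String] -> Either ParseError [Int]
parseAll = traverse parseOne . zip [0 ..]
  where
    parseOne (i, s) = case reads s of
      [(n, "")] -> Right n
      _         -> Left (ParseError i s)
```

Here `parseAll ["1", "2", "x"]` yields `Left (ParseError 2 "x")`, so the caller knows exactly where in the input the computation stopped.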

Being able to resume the computation is possible thanks to referential transparency - if you're able to serialise the monad and persist it to a file, you should be able to resume any computation at an arbitrary point in time. The Cont monad might also help in this case, but I've never used it myself: https://hackage.haskell.org/package/mtl-2.0.1.0/docs/Control...

Also, you can use monads to keep track of state in a functional manner - take a look at the State and ST monads.

What I'm trying to say is that you've indeed identified some of the sore aspects of functional programming, but these problems are not inherent to the programming model - they're all solvable in theory, with enough time and effort. It's a different paradigm, so some wheels have to be reinvented or adapted to it.

I agree with your last paragraph: stateful user interfaces are fundamentally better for us humans. Indeed, I think that mixing both imperative and functional programming is the way to go - have a stateful/imperative "chassis" around your functional "engine" and get the benefits from both.


It sounds like you're saying those things are not an inherent limit of the pure functional programming model, and you support that by listing ways to fix them that break functional purity.


> and support it by listing ways to fix it that break the functional purity.

I don't see anything in his post that does anything of the sort. Can you be more specific?


The only way you're breaking functional purity is by doing IO, which none of those suggestions are doing.


The point of `IO` is that using it doesn't break referential transparency.


Several of them write to a file or store state in one. That's impurity, even if done "nicely".

It just feels like saying "don't talk to the outside world that way, do it this way".


The only part where I suggested to store a monad in a file is when I proposed it as a way of pausing a computation and resuming its execution at a later point in time. My mistake for mentioning the word "file", but I meant it in the abstract sense.


Amen. You might want to check out my project that attempts to rationalize mutation through time management rather than avoid it:

http://research.microsoft.com/en-us/people/smcdirm/managedti...

Anyway, there are many other ways of fixing or transcending von Neumann; pure functional might not be it.


This is the second time I've looked at that page (saw it on LtU several months ago), and I find it baffling.

In particular, what escapes me is what those videos represent. Why are they showing typing, rather than what some code does? Are you demonstrating a programming language or a live coding environment? The first two paragraphs seem to discuss the former, but the videos look like the latter. And if it is the latter, I can't figure out what the language is doing because I'm so hung up on the typing/deletion/live feedback.

The only way I've been able to get anywhere is by looking at the paper. You might want to link to the paper or some other overview rather than this page.


Both. It is a live programming (not coding) environment in the spirit of Bret Victor, but it is made possible by automatic time management. You can't really show that however, so the essay focuses on the programming environment making the benefits more tangible while the paper focuses on the programming model (I point people at the essay first because many don't want to read papers). The paper is linked at the top of the essay.


We do have a functional language that's free from imperative constraints - it's called mathematics; everything runs on it.

We just don't have the source code.


Can someone please explain: "An FP system cannot compute a program since function expressions are not objects. Nor can one define new functional forms within an FP system. (Both of these limitations are removed in formal functional programming (FFP) systems in which objects "represent" functions.) Thus no FP system can have a function, apply, such that apply:<x,y> = x:y because, on the left, x is an object, and, on the right, x is a function. (Note that we have been careful to keep the set of function symbols and the set of objects distinct: thus 1 is a function symbol, and 1 is an object.)"

I understand what it says. I don't understand why

apply:<x,y> === x:y

doesn't work within the constructs of functional programming?


This page doesn't load for me, which is a pity.

Why limit the question just to functional programming?

This applies just as much, if not more, to imperative programming. At least in functional programming you have the option to execute any pure function in parallel on some independent chunk of hardware.

Whether imperative programming can be 'liberated' from the von Neumann bottleneck is a much harder problem.

In the end, both will still have to deal with Amdahl's Law, so even if you could get rid of the 'looking at memory through a keyhole' issue, you're going to have to come to terms with not being able to solve your problem faster than the sequential execution of all the non-parallelizable chunks.
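Amdahl's bound is easy to state as a formula; a small sketch (function name is mine):

```haskell
-- Amdahl's Law: with parallelizable fraction p of the work and n
-- parallel units, overall speedup is capped at 1 / ((1 - p) + p / n).
-- As n grows without bound this approaches 1 / (1 - p), regardless
-- of paradigm.
amdahl :: Double -> Double -> Double
amdahl p n = 1 / ((1 - p) + p / n)
```

For instance, even with 90% of the work parallelizable, unlimited hardware caps the speedup at 10x.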


> This page doesn't load for me, which is a pity.

https://web.archive.org/web/20131225040636/http://conal.net/...

> Why limit the question just to functional programming?

There was already an article about the general case :)

http://web.stanford.edu/class/cs242/readings/backus.pdf


Sorry about that. My blog seems to be working now. I don't know what happened earlier. I think the server I've been using is underpowered. Hopefully I'll get my site migrated soon.


Here is the cached version of the article: http://webcache.googleusercontent.com/search?q=cache:CHoIZZi...



> This page doesn't load for me, which is a pity.

It only loads in browsers written in a purely functional language!


A pure function cannot always be executed in parallel. For that it might need access to shared data. On non-von Neumann machines this may not be possible.


Wouldn't such a function not be pure? Or am I not understanding the notion of purity?


Of course it can be pure, so long as the shared data is not mutated. Functional languages such as Haskell don't copy values all over the place when calling functions, they pass in pointers.


But that's an optimisation, which can be switched off to get the benefits of parallelism. This is a tradeoff that programmers traditionally had to consider, but with functional programming the compiler can figure it out for us.


That's assuming that "figuring it out for us" is decidable. Many of the things we wish the compiler could do for us are not.


"It's not decidable" is not a guarantee that humans can do better than a compiler. It means any approach will sometimes be wrong, but a compiler might well (in principle) be wrong less often than a human.

This is separate from the "a human has a lot of additional context that's hard to express" argument, which is valid but has nothing to do with decidability.


> A pure function cannot always be executed in parallel.

Try doing that to eg calculating chained hashes.
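As a sketch, an FNV-1a-style chain (the constants are FNV's, but this is a stand-in, not a real cryptographic hash) is pure yet inherently sequential, because each step consumes the previous digest:

```haskell
import Data.Bits (xor)
import Data.List (foldl')
import Data.Word (Word64)

-- Each step depends on the previous chain value, so the fold cannot
-- be parallelized even though every step is a pure function.
step :: Word64 -> Word64 -> Word64
step acc x = (acc `xor` x) * 1099511628211

chainHash :: [Word64] -> Word64
chainHash = foldl' step 14695981039346656037
```

Purity rules out hidden side effects, but it does not remove data dependencies between steps.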


This is a really interesting article (and a blog I want to come back to). Thanks, OP.

What Haskell seems to achieve, and I'm not an expert on it yet, is an incentive system that encourages small functions, especially when you're "in" a monad, because of the "any side effect makes the whole function 'impure'" dynamic as seen in the type system (also known as "you can't get out of the monad"). Of course, it's sometimes a pain in the ass to thread your RNG state or other parameters (that remain implicit in imperative programming) through the function, and that's where you get into the rabbit hole of specialized monads (State, Reader, Writer, RWS, Cont) and, for more fun yet, monad transformers.
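For instance, threading an RNG seed via mtl's State monad keeps the plumbing implicit instead of passing the seed by hand (the LCG constants below are the classic glibc ones, used purely for illustration):

```haskell
import Control.Monad (replicateM)
import Control.Monad.State (State, evalState, state)

type Seed = Int

-- A toy linear-congruential generator; the State monad threads the
-- seed through successive calls so callers never pass it explicitly.
nextRand :: State Seed Int
nextRand = state step
  where
    step s = let s' = (s * 1103515245 + 12345) `mod` 2147483648
             in (s', s')

threeRands :: Seed -> [Int]
threeRands = evalState (replicateM 3 nextRand)
```

The result is still a pure function of the initial seed; only the bookkeeping is hidden.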

I think that the imperative approach to programming has such a hold because, at small scale, it's far more intuitive. At 20 lines, defining functions feels dry and mathematical and we're much more attracted to the notion of doing things (or, making the computer do things). And, contrary to what we think in functional programming, imperative programming is more intuitive and simpler at small scale (even if more verbose). It's in composition and at scale (even moderate scale, like 100 LoC) that imperative programming starts to reach that high-entropy state where it's hard to reason about the code.


The most important feature of an effect typing system is that it's a pain in the ass. More technically, the complexity of effects is (usually) infectious so a type system either lies to you about effects or is really obnoxious.

The upshot of either thing is that you're heavily encouraged to shatter code into the purest composable fragments you can discover.


This is an excellent synopsis of my experience with Haskell.


At 20 lines, defining functions feels dry and mathematical and we're much more attracted to the notion of doing things (or, making the computer do things)

I think the "feels dry" problem can be solved by always writing functions in an environment rich with sample inputs and which passively calls your function for you! The text of a function would be the central component of a screen with all kinds of 'doo-dads' poking, prodding, testing, exercising that function in all kinds of interesting combinations.

What this does is underscore the concrete utility of a function, which is fundamentally tied to its application to concrete arguments (and its ability to cooperate with other functions). Function application always results in a flurry of instructional von Neumann-esque side effects, and I don't think that can be avoided without violating the Second Law.


Haskell is lazy, which paradoxically means that its execution is sequential. In order to be truly liberated the execution order needs to be unspecified.


Lazy and eager evaluation are both sequential evaluation schemes, and are properties of language implementations, not of languages. Haskell is thus not lazy (nor is any other language). Rather, Haskell is a non-strict language, meaning that lambda is just abstraction, not abstraction-plus-strictification as in "strict" languages. Non-strictness does not imply sequential evaluation or indeed any particular evaluation order. (Ditto for strictness.) For instance, speculative evaluation is a parallel strategy that can implement non-strictness.
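A small illustration of non-strictness as a semantic property rather than an evaluation order (names are mine):

```haskell
-- `constOne` never inspects its argument, so applying it to a
-- diverging expression still yields a value. How (or whether) the
-- argument gets evaluated is an implementation choice.
constOne :: a -> Int
constOne _ = 1

diverge :: Int
diverge = diverge  -- bottom: loops forever if forced

-- `constOne diverge` evaluates to 1 under any correct implementation,
-- whether call-by-need, call-by-name, or speculative/parallel.
```

A correct speculative implementation may even start evaluating `diverge` in parallel, as long as it abandons that work once the result is known.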


> Haskell is lazy, which paradoxically means that its execution is sequential.

I don't understand what you mean.


Although I don't agree that "lazy => sequential", what he/she probably means is that Haskell's lazy execution model sets constraints on program execution, analogous to an imperative language, which also sets constraints on execution order.

Advantages of an unspecified execution order: more compiler optimizations allowed/possible.

Disadvantages: even harder to reason about. In fact, some functions may or may not terminate, depending on the whims of the compiler.


Termination (really, non-bottom-ness vs bottom-ness) is a semantic (denotational) property, independent of the whims of any correct compiler. Execution/evaluation order, on the other hand, is an implementation (operational) choice. For an implementation to be correct, the termination or nontermination of generated code must agree with the semantics of the language.


In principle, you can leave denotational properties unspecified in the language spec...



