
Yeah, that would be good to add to the docs.

Interaction nets are an alternate model of computation (along the lines of the lambda calculus or Turing machines). They have several interesting properties, one of the most notable being that they are fundamentally parallel. https://en.wikipedia.org/wiki/Interaction_nets https://t6.fyi/guides/inets

I think there are many potential applications for interaction nets in parallel & distributed computing, among other fields; and such applications will need a language – hence Vine.

(re your edit – that's fun lol; Totally Cubular is one of my favorite projects)




Do interaction nets have any interesting properties that make them useful for distributed computing? Think IoT, edge computing, embedded, that kind of thing?

And synchronization primitives?

I wanted to also say I loved reading the documentation. Your idea of places, spaces and values feels like a waaay more intuitive naming scheme than what's common in CS.

The time-travel example also felt oddly intuitive to me. I don't really care that it uses black magic underneath, it's quite elegant!


> Do interaction nets have any interesting properties that make them useful for distributed computing?

Of course, the intrinsic parallelism is useful there (especially since it is emergent from the structure of the program, rather than needing to be explicitly written). In terms of other interesting properties: interaction nets don't rely on linear memory, instead being based on a graph, which can naturally be chunked and distributed across machines, with synchronization only being necessary when wires span different chunks.
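As a rough sketch of that chunking idea (Python, purely illustrative and nothing Vine-specific; the toy graph and chunk assignment are made up): partition a net's nodes across machines, and only the wires that cross a chunk boundary need any synchronisation.

```python
# Hypothetical sketch: a net's graph split into chunks; only wires that
# span two chunks require cross-machine synchronisation.

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]   # a toy net's wires
chunk_of = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}

cross_chunk = [(u, v) for u, v in edges if chunk_of[u] != chunk_of[v]]

# Only the wire (2, 3) spans machines; every other rewrite is
# purely local to one chunk.
assert cross_chunk == [(2, 3)]
```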

> And synchronization primitives?

The initial answer to this question is: interaction nets don't need synchronization primitives, as all parallelism is emergent from the net structure.

Now, in practice, one may want some additional synchronization-esque primitives. For example, a parallel short-circuiting 'or' is not expressible in vanilla interaction nets (as all computation is deterministic, and such an operation isn't deterministic). There are extensions to interaction nets for non-determinism that allow some of these use cases, and these start to look more like synchronization primitives. Vine doesn't support any of these at the moment, but it may in the future.
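To illustrate why a parallel short-circuiting 'or' is outside a deterministic system, here's a rough thread-based sketch in Python (nothing to do with interaction nets or Vine themselves; all names are made up): it can return an answer even while one branch diverges, and which branch is observed first depends on timing.

```python
import threading

def parallel_or(branch_a, branch_b):
    # Returns True as soon as *either* branch yields True, even if the
    # other branch never terminates. Which branch "wins" the race is
    # timing-dependent, i.e. non-deterministic. (The all-False case
    # diverges here; this sketch only shows the short-circuit behaviour.)
    done = threading.Event()

    def run(branch):
        if branch():
            done.set()

    for branch in (branch_a, branch_b):
        threading.Thread(target=run, args=(branch,), daemon=True).start()
    done.wait()
    return True

# Short-circuits even though the second branch blocks forever:
assert parallel_or(lambda: True, lambda: threading.Event().wait() or True)
```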

> I wanted to also say I loved reading the documentation. Your idea of places, spaces and values feels like a waaay more intuitive naming scheme than what's common in CS.

I'm glad to hear it! I spend a lot of time trying to come up with good names for things, so it's nice to know that that pays off. Though I suppose it's not too hard to improve on the status quo, when it's 'lvalue'/'rvalue', lol. (I did, however, steal 'place' and 'value' from Rust.)

> The time-travel example also felt oddly intuitive to me. I don't really care that it uses black magic underneath, it's quite elegant!

Yeah, the 'time-travel' stuff sounds wacky, but I do think it can be really intuitive if one is willing to accept it. I wrote a really cool algorithm a little while ago that makes heavy use of this 'time-travel' idea; I [wrote about it](https://discord.com/channels/1246152587883970662/12461564508...) on Discord. (I should really type that up into a blog post of sorts.)


This is something I've never gotten from my annual rereading of the interaction nets wiki page, because isn't the lambda calculus also fundamentally parallel? Are nets even more so?


Both lambda calculus and interaction nets are confluent. That is, for a term that can be normalised (i.e. evaluated), one can obtain the same answer by performing any available reduction in any order (provided the chosen order terminates). For example, for `A (B) (C (D))`, I can choose to first evaluate either `A (B)` or `C (D)` and the final answer will be the same. This is true in both systems (although reduction in interaction nets has more satisfactory properties).
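A loose Python analogy for that confluence claim (not a real reducer; `A`, `B`, `C`, `D` here are just made-up pure functions and values): evaluating the two independent redexes in either order yields the same result.

```python
# Hypothetical sketch: with pure functions, the order in which we evaluate
# the independent subterms of A (B) (C (D)) doesn't change the answer.

A = lambda b: lambda cd: ("A", b, cd)
B = "b-value"
C = lambda d: ("C", d)
D = "d-value"

# Order 1: reduce A (B) first, then C (D)
ab_first = A(B)
result_1 = ab_first(C(D))

# Order 2: reduce C (D) first, then A (B)
cd = C(D)
result_2 = A(B)(cd)

assert result_1 == result_2  # confluence: same normal form either way
```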

The key reason one may consider interaction nets to be more parallelisable than lambda calculus is that the key evaluation operation is global in lambda calculus, but local in interaction nets. The key evaluation operation in lambda calculus is (beta) reduction. For instance, if one evaluates a lambda term `(\n -> f n n) x`, reduction takes this to the term `f x x`. To do so, one must duplicate the entire term `x` from the argument to perform the computation, by either 1) physical duplication, or 2) keeping track of references. Both are unsatisfactory solutions with many properties hindering parallelism. As I shall explain, the term `x` may be of unbounded size or be intertwined non-locally with a large part of the control graphs of other terms.
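A toy illustration of that duplication cost (Python, naive substitution on a tuple AST; a rough sketch with made-up names, not how any real evaluator works):

```python
# Hypothetical sketch of naive beta reduction: substituting x into
# (\n -> f n n) physically duplicates the entire argument term.

def beta_reduce(body, var, arg):
    """Substitute `arg` for every occurrence of `var` in `body` (a nested tuple AST)."""
    if body == var:
        return arg                      # each occurrence copies the whole argument
    if isinstance(body, tuple):
        return tuple(beta_reduce(t, var, arg) for t in body)
    return body

# (\n -> f n n) x  ==>  f x x
term = ("f", "n", "n")
big_arg = ("x", ("huge", "subterm"))    # x may be unboundedly large
reduced = beta_reduce(term, "n", big_arg)

# The argument now appears twice: the cost grows with the size of x.
assert reduced == ("f", big_arg, big_arg)
```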

If `x` is simply a ground term (i.e. a piece of data), then either duplication or keeping track of references seems an inevitable and reasonable cost, with the usual managed-language issues of garbage collection. If one attempts to solve the problem by forcing every argument to be a ground term, the only method is to impose eager evaluation: always evaluating the leaves of an expression before its internal nodes. Eager evaluation can easily become unboundedly wasteful when one strives to reuse a general computation for more specific use cases, so one may not prefer an eager evaluation doctrine.
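That wastefulness can be sketched like so (Python; the thunk stands in for a lazily-evaluated subterm, and all names here are made up for illustration):

```python
# Hypothetical sketch: eager evaluation forces arguments even when the
# function never uses them, which lazy evaluation (thunks) avoids.

calls = {"expensive": 0}

def expensive():
    calls["expensive"] += 1
    return sum(range(10**6))            # stand-in for a costly subterm

def const_zero_eager(arg):
    return 0                            # ignores arg, but the eager call already paid

def const_zero_lazy(thunk):
    return 0                            # the thunk is never forced

const_zero_eager(expensive())           # eager: expensive() runs anyway
const_zero_lazy(lambda: expensive())    # lazy: expensive() never runs

assert calls["expensive"] == 1          # only the eager call paid the cost
```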

However, once one evaluates in an order that is not strictly eager (e.g. lazy evaluation), the terms that one is duplicating or referencing are no longer simple pieces of data, but pieces of a (not necessarily acyclic) control graph, and any referencing logic quickly becomes very complicated. Moreover, the argument `x` could also be a function, and keeping track of references would involve keeping track of closures over different variables and scopes, which complicates the problem of sharing even further.

Thus, either one follows an eager evaluation order, in which most of the nodes in a term's expression tree are not yet available for evaluation and pieces of work are only generated as evaluation happens, imposing a global and somewhat strict serialised order of execution; or one deals with a big, complicated shared graph, which is also inconvenient to distribute across computational resources.

In contrast with lambda calculus, the key evaluation operation in interaction nets is local. Interaction nets can be seen as lower-level than lambda calculus, and both code and data are represented as graphs of nodes to be evaluated. Thus, a large term is represented as a large net, and regardless of the unbounded size of a term, only one node from the term's graph is involved in each unit of evaluation.

Given a graph of some 'net' to be evaluated, one can choose any "active" node and begin evaluating right there, and the result of computation in that unit of evaluation will be guaranteed to affect only the nodes originally connected to the evaluated node, no referencing involved. Thus, the problem of computation becomes almost embarrassingly parallel, where workers simply pick any piece of a graph and locally add or remove from that piece of the graph.
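As an illustrative sketch of that locality (Python; a toy unary-addition rule set in the spirit of interaction rules, not real interaction-net machinery, and all names are made up): each rewrite consults only one active pair and its immediate neighbours, so independent rewrites commute.

```python
# Hypothetical sketch of interaction-style evaluation. Toy rules for
# unary addition, applied as local rewrites:
#   Add(Zero, y)    -> y
#   Add(Succ(x), y) -> Succ(Add(x, y))

def step(term):
    """Apply one local rewrite, if an active pair is found."""
    if term[0] == "Add":
        _, x, y = term
        if x == ("Zero",):
            return y                            # Add-Zero interaction
        if x[0] == "Succ":
            return ("Succ", ("Add", x[1], y))   # Add-Succ interaction
        return ("Add", step(x), y)              # descend: still a local decision
    if term[0] == "Succ":
        return ("Succ", step(term[1]))
    return term

def normalise(term):
    while True:
        nxt = step(term)
        if nxt == term:
            return term
        term = nxt

def peano(n):
    t = ("Zero",)
    for _ in range(n):
        t = ("Succ", t)
    return t

# 2 + 3 = 5, reached purely by local rewrites
assert normalise(("Add", peano(2), peano(3))) == peano(5)
```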

This is what is meant when one refers to interaction nets being more parallelisable than lambda calculus.


That's a great explanation.


Isn't any pure language easy to parallelize? What advantage do interaction nets offer over a parallelizing compiler for a purely functional language?


You may be interested in my other comment. To put it simply, pure languages remove serialisation of false data dependencies, exposing data parallelism, and interaction nets remove serialisation of false control dependencies, exposing control parallelism.



