I've seen some discussion of cursors as a way of displaying/modifying only part of the global state, but it wasn't clear to me whether they're a formalized way of loading only part of the global state, or whether cursors still require that the entire underlying "global state" be loaded into memory.
--
To give more background, let me be more specific about what I have in mind when I ask this.
I've been working on a JS accounting app. One thing about accounting is that the balance of an account is dependent on every prior transaction that the account has ever had. So modeled in functional terms, it seems like you'd have to make this a tree of depth "N" where N is the total number of transactions in the account. It seems like you have to model it this way to get the data dependencies right (a change in an early transaction affects every subsequent transaction).
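To make that dependency concrete, here's a minimal sketch (TypeScript-flavored JS, names made up): each balance is a fold over every prior transaction, which is exactly why an edit to an early transaction invalidates everything after it.

    interface Transaction {
      id: string;
      amount: number; // positive = credit, negative = debit
    }

    // The balance after the k-th transaction depends on transactions 0..k,
    // so the dependency chain is as deep as the account's entire history.
    function balanceAfter(transactions: Transaction[], k: number): number {
      return transactions
        .slice(0, k + 1)
        .reduce((balance, tx) => balance + tx.amount, 0);
    }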
A functional approach with such a deep tree seems to break down (at least from the relatively little I know about Om). For one, I don't want to have to load every transaction into memory. Even though the most recent transaction depends logically on the first one, I don't want to be forced to load the first one just to show the most recent one. One possible way to work around this is to collapse all transactions prior to the ones I want to display into a single "fake" transaction that stores the accumulated balance as of this point. But now my in-memory model has to differ from my real model, and as the two diverge things start to get more complicated.
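For what it's worth, the "fake transaction" workaround might look something like this (again just a sketch with hypothetical names, reusing the Transaction shape above): a checkpoint record summarizes everything older than the window you actually load, and balances are folded starting from it.

    interface Checkpoint {
      throughId: string;        // last transaction folded into the checkpoint
      accumulatedBalance: number;
    }

    // Balance of the newest loaded transaction, without touching anything
    // older than the checkpoint.
    function currentBalance(checkpoint: Checkpoint, recent: Transaction[]): number {
      return recent.reduce(
        (balance, tx) => balance + tx.amount,
        checkpoint.accumulatedBalance
      );
    }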
A bigger problem: I can't think of a good way, with this design, to indicate to the data layer which transactions the UI actually cares about and wants to have loaded into memory. If the tree has only one logical root (the most recent transaction) and is logically N transactions deep, how does the UI indicate which of those N it actually cares about? Maybe Om cursors can do this somehow?
I've been using a more traditional mutation-based/observable design, which seems to be working well. The UI can "subscribe" to certain transactions/balances, and this serves as a signal to the data layer about what needs to be loaded and what derived values need to be kept up to date. The data layer can be smart about how many transactions it actually needs to load to compute what the UI cares about.
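Roughly, the subscription side looks like this (hypothetical API, not any real library): the set of active subscriptions is the data layer's signal for what to load and which derived values to keep current.

    type Listener = (balance: number) => void;

    class AccountStore {
      private listeners = new Map<string, Set<Listener>>();

      // The UI registers interest; the returned function unsubscribes.
      subscribeToBalance(accountId: string, listener: Listener): () => void {
        if (!this.listeners.has(accountId)) {
          this.listeners.set(accountId, new Set());
          this.loadEnoughFor(accountId); // data layer decides how much to load
        }
        this.listeners.get(accountId)!.add(listener);
        return () => {
          this.listeners.get(accountId)?.delete(listener);
        };
      }

      private loadEnoughFor(accountId: string): void {
        // e.g. fetch a stored running balance for this account plus only the
        // recent transactions the UI displays, then notify listeners.
      }
    }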
I feel like a lazily loaded data structure would solve this problem: if you only need the last k transactions, then it only loads the last k transactions...
With regard to Om or Clojure I'm not sure of the specifics of such an implementation, but doing something like that in other languages with functional programming support is one of the major selling points.
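In JS terms, the closest thing I can picture is an async generator that pages backwards from the newest transaction on demand (sketch only, reusing the Transaction shape from the first sketch; fetchPage is a hypothetical data-layer call): if the consumer stops after k items, nothing older is ever fetched.

    // Hypothetical data-layer call: up to `limit` transactions older than
    // `before` (or the newest ones when `before` is null), newest first.
    async function fetchPage(before: string | null, limit: number): Promise<Transaction[]> {
      return []; // stubbed; a real app would hit IndexedDB or a server API
    }

    async function* transactionsNewestFirst(pageSize = 50): AsyncGenerator<Transaction> {
      let cursor: string | null = null;
      while (true) {
        const page = await fetchPage(cursor, pageSize);
        if (page.length === 0) return;
        yield* page;
        cursor = page[page.length - 1].id;
      }
    }

    // Consuming only the last k transactions never touches older history.
    async function lastK(k: number): Promise<Transaction[]> {
      const out: Transaction[] = [];
      for await (const tx of transactionsNewestFirst()) {
        out.push(tx);
        if (out.length === k) break;
      }
      return out;
    }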