Yes, they should. Currently the DOM works in an imperative, immediate way, kind of like a BASIC program or OpenGL immediate mode. After a write to a property you are guaranteed to get the same value back when you read it, and dependent properties are guaranteed to be updated immediately. Surprisingly, this imperative style is actually quite inefficient, because interleaving writes with reads can force a synchronous reflow on every read.
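For instance, reading a layout-dependent property right after a write forces the engine to reflow so the read can be answered. A minimal sketch (the `.item` selector is made up):

```js
// Forces a reflow on every iteration: each offsetWidth read must
// reflect the width written just before it, so layout runs N times.
const items = document.querySelectorAll('.item');
items.forEach(el => {
  el.style.width = '100px';     // write
  console.log(el.offsetWidth);  // read: triggers synchronous layout
});

// Grouping all reads before all writes lets layout run at most once.
const widths = [...items].map(el => el.offsetWidth);  // reads first
items.forEach(el => { el.style.width = '100px'; });   // then writes
```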
To prevent this, the programming model would have to change. There are two approaches I can imagine:
- Introduce a "DOM batching mode" that removes the immediate-mode guarantees. If you set an element's width, you are no longer guaranteed to read the same value back until layout has run, so stash the intermediate width yourself if you need it. You wouldn't need batching mode for the whole DOM tree, just the majority of it that doesn't require custom layout.
- IIUC, most of the cases where you need to interleave reads and writes of DOM properties come from special layout requirements that CSS alone can't express. There should be an API for specifying a custom layout strategy on a parent DOM element; JavaScript should be fast enough (see the sketch after this list). A side benefit is that we would no longer have to wait for, e.g., Flexbox adoption; you could just roll your own.
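To make the second idea concrete, here is one hypothetical shape such an API could take; the `layoutStrategy` property and the `measure`/`arrange` callbacks are invented for illustration (CSS Houdini's layout worklets explore similar territory, but nothing like this is broadly available):

```js
// Invented API: a parent element opts out of CSS layout and supplies
// its own measure/arrange pass, which the engine runs during layout.
const container = document.querySelector('#sidebar');  // hypothetical element
container.layoutStrategy = (children, constraints) => {
  // A simple vertical stack, just to show the shape of the callback.
  let y = 0;
  for (const child of children) {
    const size = child.measure({ maxWidth: constraints.width });
    child.arrange({ x: 0, y, width: size.width, height: size.height });
    y += size.height;
  }
  return { width: constraints.width, height: y };  // container's own size
};
```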
It is obvious that we are trying to turn HTML into a GUI framework. So let's do it properly.
The problems we are facing with the DOM have already been solved by multiple game engines and GUI frameworks.
> Introduce a "DOM batching mode" that removes the immediate-mode guarantees. If you set an element's width, you are no longer guaranteed to read the same value back until layout has run, so stash the intermediate width yourself if you need it. You wouldn't need batching mode for the whole DOM tree, just the majority of it that doesn't require custom layout.
Some sort of DOM-like buffer[1] that you could render into and then "flush"/insert, maybe?
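DocumentFragment already gives you part of that today: you build a subtree off-document and attach it with a single insertion, so the live tree is mutated (and laid out) once:

```js
// Build 1000 nodes into a detached fragment: no styles, no layout yet.
const frag = document.createDocumentFragment();
for (let i = 0; i < 1000; i++) {
  const li = document.createElement('li');
  li.textContent = `item ${i}`;
  frag.appendChild(li);
}
// One insertion into the live DOM, hence one style/layout pass.
document.querySelector('ul').appendChild(frag);
```

It only covers fresh insertions, though; mutating nodes that are already live still hits the real tree directly.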
You'd think that with a proper background rendering thread, they could get away with a retained scene graph maintained via dirty bits. But obviously, I'm missing something: what is it about the DOM that makes changes so expensive that they have to be batched via a virtual one?
Can't it batch layout calculations, as is typically done in a retained scene graph? You don't run the layout calculations on each change as it occurs!
Is it an artifact of the DOM API? In WPF, they have to maintain two sizes because of this: a set size (if specified) and a layout-computed size that is filled in when the layout computations are done in batch. This adds some complexity (e.g., ActualWidth is not always equal to Width, and so on), but the perf is pretty good.
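The two-size idea is easy to sketch outside WPF too: reads never trigger layout, because the computed size lives in its own field that only the batched layout pass writes (the class is illustrative; the names loosely mirror WPF's Width/ActualWidth):

```js
class LayoutNode {
  constructor() {
    this.width = undefined;  // set size: what the author asked for, if anything
    this.actualWidth = 0;    // computed size: written only by the layout pass
    this.children = [];
  }
  // Batched pass: resolve every node's actual size in one top-down walk.
  layout(availableWidth) {
    this.actualWidth = this.width ?? availableWidth;
    for (const child of this.children) child.layout(this.actualWidth);
  }
}
```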
> Dirty checking is slower than observables because you must poll the data at a regular interval and check all of the values in the data structure recursively.
You don't have to poll your dirty bits! When you dirty something, you put it into a dirty list/set. You only re-render when the dirty list/set is non-empty, clean deeper elements before shallow ones, and it's quite optimal.
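A minimal sketch of that scheme, assuming each node knows its `depth` and how to `render()` itself:

```js
const dirty = new Set();

function markDirty(node) {
  dirty.add(node);  // O(1); nothing is ever polled
}

function flush() {
  if (dirty.size === 0) return;  // no dirt, no work
  // Clean deeper nodes before shallow ones, as described above.
  const nodes = [...dirty].sort((a, b) => b.depth - a.depth);
  dirty.clear();
  for (const node of nodes) node.render();
}

// Flush once per frame; an empty set makes this a cheap no-op.
requestAnimationFrame(function frame() {
  flush();
  requestAnimationFrame(frame);
});
```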
> A virtual DOM is nice because it lets us write our code as if we were re-rendering the entire scene.
Totally: they are basically turning a retained model into a not-so-slow immediate model, which is a nice programming abstraction, but it is not a performance win over an efficient retained model.
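In other words, the app code gets to look immediate-mode while a diff layer recovers the retained model's minimal updates. A toy version of the idea (single node, no children or keys):

```js
// App code "re-renders the entire scene" from state every time...
function render(state) {
  return { tag: 'div', props: { class: state.active ? 'on' : 'off' } };
}

// ...and the diff layer turns that into minimal retained-tree edits.
function patch(el, prev, next) {
  for (const [key, value] of Object.entries(next.props)) {
    if (prev.props[key] !== value) el.setAttribute(key, value);  // touch DOM only on change
  }
}
```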
> DOM operations are very expensive because modifying the DOM will also apply and calculate CSS styles, layouts. The saved time from unnecessary DOM modification can be longer than the time spent diffing the virtual DOM.
So layout calculations in the normal DOM aren't incremental, but become incremental through a virtual DOM? Assuming this isn't related to batching, it sounds like the concrete DOM is just a bad implementation? Or does the virtual DOM avoid layout calculations entirely and somehow magically fix the layout when things change?