
It makes me feel so sad that the "state of the art" in front-end development is apparently rerunning your render code on every little update. Yes, I know there are shortcuts that make this more efficient, but in essence the technique remains inelegant in the sense that it does not extend well to other parts of the code (code whose computations might change only partially, not fully, on an update).



I think you will find that it does extend well to other parts of your code. Code that is idempotent allows you to reason much more effectively about what it will do. It always does the same thing and does not introduce side effects. It also allows you to test it much more easily. In fact, doing computations is one of the places where being idempotent will really clean up your code.
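To make the contrast concrete, here's a minimal sketch (my own example, not from the comment) of the same computation written with hidden state vs. as a pure function. The names `addToTotal` and `add` are just illustrative:

```javascript
// Impure: the result depends on hidden state that accumulates across calls.
let total = 0;
function addToTotal(n) {
  total += n; // side effect: mutates external state
}

// Pure: the result depends only on the inputs, so any call can be
// replayed, reordered, or tested with a plain assertion.
function add(a, b) {
  return a + b;
}

add(2, 3); // always 5, no setup or teardown needed
```

The pure version is what makes "rerun the whole thing on every update" tractable: rerunning it is guaranteed to be harmless.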

It may seem like a newfangled idea, but it has actually been around for quite a long time. It just didn't make it out of academia and the few shops doing functional programming until recently.


Not to mention idempotent code is trivially memoized, allowing you to skip computations you couldn't previously reason about. A function that produces a value is always comparable to another value — a function that modifies some state requires that you have a way to observe that the world changed in order to skip doing it again.
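A minimal single-argument memoizer sketch (my own illustration; `memoize` and `square` are hypothetical names) showing why this works, assuming the function's output depends only on its argument:

```javascript
// Cache results keyed by argument. Safe only because the wrapped
// function is pure: same input always yields the same output.
function memoize(fn) {
  const cache = new Map();
  return function (arg) {
    if (!cache.has(arg)) {
      cache.set(arg, fn(arg));
    }
    return cache.get(arg);
  };
}

let calls = 0;
const square = memoize((n) => {
  calls += 1;
  return n * n;
});

square(4); // computes: 16
square(4); // cache hit: the computation is skipped, calls stays at 1
```

With a state-mutating function you can't do this, because "has anything relevant changed?" isn't answerable from the arguments alone.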


This pattern of the virtual DOM has some parallels with functional programming. At first glance, functional programming looks like it sacrifices the performance you gain from in-place mutation -- which is true to some extent. But when you look deeper, you'll find that functional languages use data structures that have been adapted to perform very well under functional programming patterns.

In the same way, when you look closer at the virtual DOM model, you'll find there's a lot of room to optimize. For example, when you use React with immutable data structures (e.g., Immutable.js), equality checks that would otherwise require a deep walk reduce to fast reference comparisons, letting React limit re-renders to the minimum subtree required and batch the resulting DOM updates for a given change.
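Here's a sketch of why immutable updates make that cheap. Immutable.js does this with structural sharing internally; plain objects with spread copies are shown here for brevity:

```javascript
// With immutable updates, "did anything change?" reduces to a reference
// comparison instead of a deep walk.
const state = { user: { name: "Ada" }, todos: ["ship"] };

// An immutable update copies only the path that changed...
const next = { ...state, todos: [...state.todos, "test"] };

// ...so unchanged subtrees keep their identity:
next.user === state.user;   // true  -> that subtree can skip re-rendering
next.todos === state.todos; // false -> only this branch is dirty
```

A component can therefore bail out of rendering with a single `===` on its props instead of comparing every nested field.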

With React, you often end up with code that's even faster than manual DOM manipulation. Of course, your mileage will vary: depending on which model you use and how you program, either methodology can end up faster or slower.

But at the end of the day, the performance gained from state mutation just doesn't seem worth making the code significantly harder to reason about, or the challenge of scaling a project whose complexity grows much faster with each line of logic.

And my feeling is that, once you account for the extra time spent managing a mutation-heavy codebase, the "performance per engineering hour" of a virtual DOM comes out ahead for many projects.


I think the main problem with your argument is that you assume that the world looks like a shallow tree. For UIs this may be, to a great extent, true. But for non-frontend code, the world looks more like a deep DAG (directed acyclic graph).

Also, suppose I have a list of thousands of elements. Now suppose one element is added to the list. React will still perform a comparison operation on all of those thousands of elements.


> suppose I have a list of thousands of elements

That's going to be a problem regardless. You shouldn't have so many elements on one page (can a user even parse through so many at once?). Use pagination or occlusion culling to show a few at a time instead.


Premature optimization is the root of all evil. Using immutable objects means that the overhead for checking a component that has not changed what it's rendering is a couple of function calls plus an object identity check. Modern JS can do that very, very fast.

And, if you find that it's still too slow, there are other strategies you can employ to fix it without having to turn your whole application into a stateful soup. Odds are, though, that this is not going to be your bottleneck.


>> React will still perform a comparison operation on all of those thousands of elements.

Using the `shouldComponentUpdate` API alleviates the problem, doesn't it? https://facebook.github.io/react/docs/component-specs.html#u...


Eh, kind of. If you're rendering, for example, a Table with a lot of TableRows and change one of the values in the data array being passed to Table.props, you'd return true from Table.shouldComponentUpdate, and then each child TableRow would need to run its shouldComponentUpdate, even though only a single one really needs to update. So the argument GP is making is that it's more efficient to directly update that single DOM element rather than update the data and then perform the calculations to determine that we need to update that single DOM element.


True in theory, but as long as each TableRow implements the PureRenderMixin and the Table render itself is efficient, you're going to need a lot more than a thousand rows before React has any trouble hitting 60fps in my experience.
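For reference, a sketch of the shallow comparison PureRenderMixin performs, written as free functions rather than React's actual class machinery (`shallowEqual` here is a stand-in for React's internal helper):

```javascript
// Each row bails out when all of its props are reference-equal, so only
// the row whose data object was actually replaced re-renders.
function shallowEqual(a, b) {
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((k) => a[k] === b[k]);
}

function shouldComponentUpdate(currentProps, nextProps) {
  return !shallowEqual(currentProps, nextProps);
}

const row = { data: { id: 1, label: "a" } };
shouldComponentUpdate(row, { data: row.data });              // false: skip
shouldComponentUpdate(row, { data: { id: 1, label: "b" } }); // true: render
```

Each unchanged row's "render" is thus a handful of `===` checks, which is why thousands of rows are usually fine in practice.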

But if you can't meet both those conditions, that sort of thing definitely can get quite slow.


Are modern CPUs really going to choke on a few thousand comparisons?

My impression has always been that, for any constant-time operation, you're gonna need to start getting into the 100Ks or even the millions to start noticing, but I don't have the data to back me up here :/


React is okay for normal form-based stuff, but it breaks down very quickly with frequent renders (intervals under ~200ms).

I like the abstraction: it is super easy, has a small API surface, and most of the time it is "fast enough". But it's no panacea. Often I have to throw D3 in to get "realtime" stuff done without blowing up the browser.


Why is it considered inelegant? If you make your 'render code' stateless, then your issues lie elsewhere other than your render code.


That's actually what I love about React, apart from using hierarchical components as building blocks.

I set the state, and the lib handles the rendering for me. It makes things much easier and is also superior to e.g. UIKit for building UIs.


> That's actually what I love about React, apart from using hierarchical components as building blocks.

This is interesting to me - I've become a bit wary of the hierarchy of responsibility as well.

Seems like a lot of it stems from the fact that there certainly is a hierarchy within the DOM. (This is one of the reasons I was so excited about GSS, which gives you the ability to build a flat DOM.) But what are the solutions otherwise?


Yup, but it's important to know that the building blocks in JSX aren't the actual DOM; they're a sort of description of it. That description is what gets diffed.

If I understood it correctly.


Hand-written code will always beat a framework; React isn't trying to be faster than domain-aware DOM manipulation.

Instead, React seeks to be the state of the art in maintainable view architecture. One of its secondary goals is non-sucky performance, a goal it meets pretty well.

Most devs aren't working on apps where React is the performance bottleneck. If you are, then, well, don't use React.



