What Rich Harris said sounds very nice and interesting, but I'd take issue with the claim that the Virtual DOM is "not the best solution for a scalable UI." As long as the slowdown caused by the Virtual DOM stays below the limits of human perception, it doesn't really matter whether something can do without a Virtual DOM or not.
I’ve worked on React applications where a single key press would cause lags of over 300ms. The React profiler wasn’t able to show any latency in my render functions, and using the browser profiler I saw that all the time was spent somewhere inside React’s internals.
There’s real overhead to React’s virtual DOM.
I've worked on dozens of large React codebases, and never once came across a performance problem caused by VDOM overhead. The lag was always caused by bad application design: in most cases, over-rendering due to a badly designed component architecture and state management system.
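To give a made-up example of the kind of design problem I mean (not anyone's actual code): per-keystroke state kept at the top of a big tree, which is exactly the sort of thing that makes a single key press feel slow.

```tsx
import { useState } from "react";

// Anti-pattern: per-keystroke state lives at the top of a large page,
// so every key press re-renders the entire subtree, not just the input.
function Page({ items }: { items: string[] }) {
  const [query, setQuery] = useState("");
  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      {/* Nothing here depends on `query`, but it all re-renders on every
          keystroke because its parent re-rendered. Pushing the query state
          down into a small search-box component, or memoizing <HugeList>,
          fixes it. */}
      <HugeList items={items} />
    </>
  );
}

function HugeList({ items }: { items: string[] }) {
  return (
    <ul>
      {items.map((item) => (
        <li key={item}>{item}</li>
      ))}
    </ul>
  );
}
```

None of that is React being slow; it's the shape of the tree forcing far more reconciliation per keystroke than the UI actually needs.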
That's bizarre; I've spent a lot of time spot-optimizing React apps and never had something like that happen.
The VDOM definitely carried overhead in my case, but it was easy to profile and optimize (with the React tools). In many cases, reflowing the sheer size of the DOM we were generating was a bigger bottleneck than the time it took React to render it in the first place.
I've seen it a lot, personally. I've even seen instances where the input would delay by over a second. Really wild stuff.
Can a better design help mitigate those issues? Sure. But I don't like having to wonder whether the problem is my own design or something internal to the library.
I think we're talking about two different things: I've definitely seen that much delay before, but it always came down to inefficient rendering logic and/or components that re-rendered excessively often, both of which are easy to detect in profiling and usually not too hard to solve.
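If it helps, this is roughly how I go hunting for it (a sketch with made-up component names): wrap the suspect subtree in React's <Profiler> and compare what each commit actually costs against what your own render functions cost.

```tsx
import { Profiler } from "react";

// Log how long each commit of a suspect subtree actually takes.
// If actualDuration is consistently large while your own render functions
// are cheap, the time is usually going into re-rendering far more
// components than the update really needed.
function logRender(id: string, phase: string, actualDuration: number, baseDuration: number) {
  console.log(`${id} ${phase}: ${actualDuration.toFixed(1)}ms (base ${baseDuration.toFixed(1)}ms)`);
}

// Stand-in for whatever part of the app feels slow.
function SuspectSubtree() {
  return <div>…</div>;
}

export function App() {
  return (
    <Profiler id="suspect" onRender={logRender}>
      <SuspectSubtree />
    </Profiler>
  );
}
```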
This is the part I thought was bizarre:
> The React profiler wasn’t able to show any latency in my render functions, and using the browser profiler I saw that all the time was spent somewhere inside React’s internals
There's nothing inherently slow about React internals. It does exactly the work you tell it to do. If you tell it to do too much unnecessary work, it'll happily go do that.
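To make that concrete with a toy example (not the parent commenter's app): hand React a brand-new object or callback prop on every render and even memoized children re-render, because their props never compare equal.

```tsx
import { memo, useState, type CSSProperties } from "react";

// Memoized child: it should only re-render when its props change.
const Row = memo(function Row({ style, onClick }: { style: CSSProperties; onClick: () => void }) {
  return (
    <div style={style} onClick={onClick}>
      row
    </div>
  );
});

function List({ ids }: { ids: number[] }) {
  const [, bump] = useState(0);
  return (
    <>
      <button onClick={() => bump((n) => n + 1)}>update</button>
      {ids.map((id) => (
        // A fresh style object and a fresh arrow function are created for
        // every row on every render, so memo() never gets to skip anything.
        // The profiler shows the time "inside React", but the extra work
        // was requested right here.
        <Row key={id} style={{ padding: 4 }} onClick={() => console.log(id)} />
      ))}
    </>
  );
}
```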
Svelte avoids this by magically not doing the unnecessary work. That doesn't mean it's faster; it just means your design has to account for a different set of considerations.
I'm going to call shenanigans on this. I've worked on dozens of applications (my own and others') with hundreds of thousands (and even millions) of DOM nodes, and I've never seen a problem like this. React simply doesn't re-render things you don't tell it to. It's easy to try to do something clever and cause unnecessary (and expensive) re-renders, but that's not React internals being slow; it's userspace code doing more work than it needs to. Of course the profiler shows the time being spent inside React: it's doing all the work your code told it to do.
React also gives you great tools to avoid and remediate this (memoization wrappers, dev tools that highlight elements as they re-render, linters that check you're using hooks correctly), so it's really hard to say this is a slowness that's endemic to the VDOM.
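A sketch of what that remediation usually looks like (names made up): stabilize callbacks with useCallback, memoize derived data with useMemo, and wrap the hot child in memo() so unchanged rows get skipped.

```tsx
import { memo, useCallback, useMemo, useState } from "react";

const Row = memo(function Row({ label, onSelect }: { label: string; onSelect: (label: string) => void }) {
  return <div onClick={() => onSelect(label)}>{label}</div>;
});

function List({ labels }: { labels: string[] }) {
  const [query, setQuery] = useState("");

  // Stable callback identity, so Row's props compare equal between renders
  // and memo() can actually skip unchanged rows.
  const onSelect = useCallback((label: string) => console.log("selected", label), []);

  // Recompute the filtered list only when its inputs change.
  const visible = useMemo(
    () => labels.filter((label) => label.includes(query)),
    [labels, query]
  );

  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      {visible.map((label) => (
        <Row key={label} label={label} onSelect={onSelect} />
      ))}
    </>
  );
}
```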
I would be very, very surprised to see React code that can take 300ms in the virtual DOM unless it's deliberately designed to do so. To be honest, I've built a whole lot of stuff in React and it's not even immediately clear to me how I would deliberately design an app to take 300ms in the virtual DOM.