This is confusing to me. From the article it sounds like the JavaScript is run on the server, producing markup. That markup is sent to the browser for rendering, and then the JavaScript is requested by the browser. When the JavaScript arrives, it is run again in the browser to re-generate the DOM with event handlers attached. If that is correct, why is the JavaScript run on the server to begin with and not just sent directly to the browser?
Is the idea to take advantage of the server's horsepower to get something on the screen fast by sending pre-rendered HTML and then wait while the browser runs the code to basically re-create the page for interactivity?
(It's been a while since I've done traditional front-end web dev.)
This post is part of a broader effort among emerging JS frameworks to send less code down to the browser. It's not just bandwidth: the time to parse, compile, and execute the JS can take as long on a phone as the download itself.
For a decent fraction of applications, a large chunk of the code that's written generates the non-interactive parts of the app (data fetching, wrappers, component markup, CSS), and the actual event-handling code is relatively small. Hacker News, for example, is a pretty common framework demo since it's simple to write. You need a bit of JS on the comments page for the vote buttons and the comment collapse, but if you write HN in a natural way in most component-oriented frameworks, the bundle coming down is considerably larger: it includes all the code for rendering the comments, even though the comments never change. On top of that, most hydration approaches put the comments in the markup and also embed a JSON/JS chunk in the page so that hydration and the client-side render can happen with consistent data. That roughly doubles the size of the page, with the overhead of the data half being run back through some subset of the JS rendering machinery.
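Schematically, that double-embedding looks something like this (the id and shape here are purely illustrative, not any particular framework's output):

```html
<!-- the rendered markup -->
<div class="comment">Nice article!</div>

<!-- the same data again, embedded so the client-side render/hydration
     can run with consistent data -->
<script id="__APP_DATA__" type="application/json">
  {"comments":[{"text":"Nice article!"}]}
</script>
```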
In the case of Qwik, the idea is to do a server side render and then lazy load everything on the client as it's needed. At least that's my understanding; I haven't used the framework beyond a toy project. There are other approaches, but to use the HN example, you'd never download the comment collapsing code if you didn't click a collapse button.
You could avoid shipping down the JSX templates but only by promising to never render a comment on the client. That's basically the approach taken by this React Server Components proposal: https://github.com/josephsavona/rfcs/blob/server-components/...
Rails has experimented with this sort of thing in the past; it requires some deep integration with the HTTP server so that clients can request updates to specific chunks of the UI rather than just URLs.
I've written quite a bit of PHP and jQuery, but with that approach there's a decision to make when you sit down to write the code: is the feature going to be implemented in PHP or in jQuery? The difference with the upcoming generation of JS frameworks is that the server and client are in the same language, so the decision can be deferred or even made automatically.
Astro, for example, is very conceptually close to PHP. Your code runs on the server, there's a code section where you grab content from the DB and a markup section where you template out the page, the lifetime of everything is one request, etc. What's changed is that you can put an attribute on a component indicating you want that component sent to the client. No attribute, it's old PHP; with the attribute, it's jQuery; and further, the send can be triggered when the element enters the viewport or when it's clicked.
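As a rough sketch of what that looks like in Astro (the component and API URL are made up; `client:visible` is the real directive, as I understand it):

```astro
---
// This part only ever runs on the server, one request at a time, like PHP.
import Counter from '../components/Counter.jsx';
const posts = await fetch('https://example.com/api/posts').then((r) => r.json());
---
<ul>
  {posts.map((post) => <li>{post.title}</li>)}
</ul>

<!-- No directive: rendered to plain HTML, no JS shipped.      -->
<!-- With client:visible, the component's JS is only sent and  -->
<!-- hydrated once it scrolls into the viewport.               -->
<Counter client:visible />
```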
The in-development Marko 6 runtime distinguishes between URL/DB-derived "settled" data and client-side/event-handler state, and only sends the latter to the client. If you have a paged list, for example, you can set up the prev/next page buttons as links and no JS gets sent down to the client, despite the app being written with current SPA component ergonomics. If you'd rather have the data fetch and render client side, you switch the data declaration from a const (derived/settled) to a let (state) and all the rendering JS gets sent down.
I'm simplifying/omitting stuff but these are new and useful takes on old ideas. I think better takes on old ideas are some of the more effective advances in the state of the art.
Generally speaking, if you approach qwik.js from a 10,000 mile view like you'd normally approach other frameworks, you're going to miss the trees for the forest. One of the axioms this framework starts from is providing a React-like developer experience, so yes, there are going to be templates expressed in JS.
Where it gets technical is in how the JS gets delivered to the browser. It starts with a small bootstrapping script that sets up event delegation, sort of like `document.documentElement.addEventListener('click', becomeAwareOfClicks)`. This happens literally in the first chunk of HTML being streamed in, meaning that the framework is already aware of clicks happening anywhere in the app, even before the HTML <body> is available in JS.
Eventually some HTML will stream in with an attribute that indicates how a click event on an element should be handled. The framework can then quickly determine whether it needs to "hydrate" that event handler: a) if the event was captured by `becomeAwareOfClicks`, and b) the intersection observer deems that element visible and available for DOM manipulation, then c) it can download the relevant handler. Business logic fires, at the latest, as soon as the intersection observer has caused the handler to download.
Note that at this point, no other JS has been downloaded yet. Eventually the rest does download, like in every other framework, but the key point is that it can respond to events with actual business logic before the rest of the JS comes down the pipe.
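A minimal sketch of that delegation pattern (not Qwik's actual internals; the attribute name and chunk path are made up):

```ts
// Set up by a tiny inline bootstrap, before <body> has even finished streaming.
document.addEventListener('click', async (event) => {
  const el = (event.target as Element).closest('[data-on-click]');
  if (!el) return;

  // The attribute names the chunk that holds the real handler,
  // e.g. data-on-click="/chunks/collapse-comment.js".
  const chunkUrl = el.getAttribute('data-on-click')!;
  const handler = await import(chunkUrl);
  handler.default(event, el);
});
```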
What you’ve described is a complete and accurate description of the “hydration” approach which the article (also correctly) describes as pure overhead. The approach proposed in the article/implemented in Qwik dispenses with re-running the JS which already ran on the server and just carries on with the state it produced.
The SSR/hydration approach:
- Benefits SEO
- Usually benefits initial content on the screen
- Has a bunch of performance penalties after that as JS is loaded, parsed, and (usually blocking) recreates the data structure representing its initial state
Qwik’s approach (at least conceptually) skips that third point entirely until an interaction needs to respond to some state change, and then does so with only the information needed for that event.
In a tweet thread, I described my mental model of this as effectively developing like I’m building an app where all the server/client UI code is shared, but the UX is like I wrote plain HTML with some jQuery or whatever to make a few elements interactive. What Qwik does is determine which parts of the server code need to become the compiled jQuery-ish code, and it only loads and calls that code when needed.
An equivalent React codebase would re-render the entire page (well mount point) even if most of it will have no meaningful effect.
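For concreteness, here's roughly what authoring looks like in Qwik (a toy counter based on my reading of the docs, so take the details with a grain of salt):

```tsx
import { component$, useStore } from '@builder.io/qwik';

export const Counter = component$(() => {
  const state = useStore({ count: 0 });
  return (
    // The $ tells the optimizer this closure can be split into its own chunk;
    // it's only fetched and executed when the button is actually clicked.
    <button onClick$={() => state.count++}>
      Clicked {state.count} times
    </button>
  );
});
```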
I think most importantly, the server can cache responses (cache the generated HTML). This is especially important for public, mostly static pages that you'd want to optimize for SEO anyway.
Servers are often weaker than many consumer computers anyway, so I don’t think it’s because they can render faster than your own browser.
I don't think the purpose of SSR + hydration is simply to move the wait time from first render to server response, or that servers can somehow render faster. To fully reap the benefits you'd have to enable caching, so that the server can serve a response without re-rendering and the client side can simply hydrate.
Caching is not something individual distributed clients can do, which is why the server is the only one reasonably able to take on this role. You can also easily configure nginx to do just-in-time caching.
It is correct, except I think the browser generates the DOM from the HTML, and the JS just attaches event handlers to it.
The reason for the double work is that the context here is SSR/SSG:
«The re-execution of code on the client that the server already executed as part of SSR/SSG is what makes hydration pure overhead: that is, a duplication of work by the client that the server already did.»
In client-side rendering (CSR), there isn’t double work in rendering the HTML, since the client only does it.
Yes, the idea is to send pre-rendered HTML (by using SSR/SSG). The client doesn’t need to re-create the page/HTML, just hydrate it (attach event handlers). But that is duplicate work, and it turns out to be a bit expensive to do on every initial load.
> If that is correct, why is the JavaScript run on the server to begin with and not just sent directly to the browser?
Just the same ordinary reasons to generate HTML on the server: it allows you to support HTTP caching (in a CDN or even browser caching), it (potentially) lets the browser start rendering content much sooner, and it will be viewable by user agents (bots, scrapers, search engines, etc., but also humans) that don't run JavaScript.
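As a toy illustration of the caching angle (plain Node; the render function and cache times are made up):

```ts
import http from 'node:http';

// A stand-in for your framework's SSR entry point.
async function renderPageToHtml(url: string): Promise<string> {
  return `<h1>Rendered ${url} at ${new Date().toISOString()}</h1>`;
}

http
  .createServer(async (req, res) => {
    const html = await renderPageToHtml(req.url ?? '/');
    // Let a CDN keep the rendered HTML for five minutes, so most visitors
    // never trigger a fresh server-side render at all.
    res.setHeader('Content-Type', 'text/html; charset=utf-8');
    res.setHeader('Cache-Control', 'public, s-maxage=300, stale-while-revalidate=60');
    res.end(html);
  })
  .listen(3000);
```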
That makes sense. So if you're using HTTP caching appropriately, the server-side "render" (the running of the JavaScript) happens on the first visit and then only infrequently thereafter.
One reason I like SSR is that you need some form of SSR for public-facing websites anyway. Website previews (like in iMessage, Twitter, etc.) rely on Open Graph tags in the HTML, and these services expect the OG tags to be available without executing any JavaScript. Since you already need this step, you can make pages load much faster by inlining any data the client would otherwise have to fetch at view time.
You get to at least see the content much faster; it's just not interactive for a bit. The benefit is clearest when rendering pages at build time and then serving the HTML over a CDN.
But even then, it's definitely still a problem vs. sites that don't have to be hydrated. Some people see this approach as a best of both worlds, but in reality it's still a compromise that has costs.
> When the JavaScript arrives, it is run again in the browser to re-generate the DOM with event handlers attached.
Not the DOM, but the virtual DOM. Instead of inserting elements (which is expensive), it can just walk the existing server-rendered DOM and attach listeners.
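In React 18 terms, that's the difference between these two entry points (assuming the usual `root` element id):

```tsx
import { createRoot, hydrateRoot } from 'react-dom/client';
import { App } from './App';

const container = document.getElementById('root')!;

// Client-side rendering: build the DOM from scratch.
// createRoot(container).render(<App />);

// Hydration: re-run the components to build the virtual DOM, adopt the
// existing server-rendered markup, and attach event listeners to it.
hydrateRoot(container, <App />);
```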
> Qwik is a new kind of web framework that can deliver instant loading web applications at any size or complexity. Your sites and apps can boot with less than 1kb of JS
Most developers would scoff at their own code written a decade ago, because we learn constantly; it's what makes this profession great.
So unless you're one of those people who are just consistently great, maybe the secret identity of Fabrice Bellard, you may want to consider this before making immature statements like this.
Besides, inexcusable atrocities being picked up by large parts of the industry (hi, JavaScript) are pretty common, and one could bet that they afford much more learning and real-world value than perfect academic solutions never used by anyone.
I was surprised when I saw his name attached to this project too. But goodness, having followed it for some time, are you ever wrong. If anything Qwik has reminded me professionally that people really can learn from their experiences and surprise you. Angular was in fact terrible. Qwik is not Angular or like it in any way.
The original Angular was, for its time, absolutely fantastic.
The state of the art has absolutely moved on, but provided you were willing to understand its model (just like these days you need to understand React's model), it provided a power-to-performance ratio that nothing else in its class was capable of at the time.
I was on a team that sunk 9 months into its model for a major project and what I saw bears no resemblance to what you're describing.
It's a framework that hated the idioms of its host language, as if the problem with front-end development was that it didn't have the ceremony of Java and the attendant abstractions/"patterns" of a statically, manifestly typed language. The conceptual overhead alone was ridiculous (as famously described here: http://codeofrob.com/entries/you-have-ruined-javascript.html), and the payoff in terms of performance was negative on mobile no matter what we did. The fact that I had to know what the digest cycle was is a testament to the leaky nature of the abstractions. The tooling, wow: as far as I could tell Batarang was actively broken for a good chunk of 2014 and 2015 and nobody had better suggestions.
I've used a lot of libraries/frameworks/languages with their own baggage but I've never had an experience where the differential between what I was hearing and the actual experience was that large, to the point where it's one of the first things I think of when it comes to the hazards of social proof.
If I wanted something that heavy again circa 2014, I'd tell myself to just use Ember. Hell, I'd rather use jQuery than Angular.
No. It was not. It was a jumbled mess of ad-hoc solutions. Many of the concepts that were core to Angular are completely forgotten now. It discovered nothing of value. I worked on a project that used it for almost a year, and none of what I learned about AngularJS has been useful since, except for the general notion of "avoid Angular" and "treat frameworks that cram stuff into HTML with suspicion".
Whatever actual criticisms you have of Angular, do you really think there's anyone around who would be even more painfully aware of those criticisms than him?
If I'm understanding correctly, this is binding event handlers "just in time" instead of when a component initializes. Isn't that just a tradeoff between working the CPU at load time vs. working the CPU on user interaction?
This doesn't seem like a great tradeoff to me. Sure, maybe you save time during component initialization, but while that is happening the user is digesting the information anyway. Then once they make their decision to act, there's no extra delay to produce the next state. However, with a "just in time" event binding, now the user has to wait (slightly) longer after they've already made their decision, which seems worse.
Haven't dug too deep, but my understanding is that this doesn't bind event handlers just in time; instead it sets up event delegation from a tiny blocking bootstrapping script, attaching a top-level event handler that catches all events as soon as the first chunk of HTML streams in.
In addition, it sets up an intersection observer. Then, depending on when an event happens: if the event occurred early enough during page load, it might require downloading that one event handler piecemeal; if the event is late enough, the action happens instantaneously, because the intersection observer already downloaded the handler in anticipation that the user would interact with the element, it being visible and all.
The trade-off is that the download of every other bit of JS effectively gets deferred due to fragmentation of how JS gets loaded in the page, but the cleverness of the trade-off is that in typical scenarios, most of that deferred code is never going to be activated by the user in the first place (or at least not in quick succession so as to overload the network).
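A rough sketch of the "prefetch the handler for whatever is visible" idea (again, not the framework's real attribute names):

```ts
// Prefetch handler chunks for elements as they scroll into view, so that by
// the time the user clicks, the code is usually already in the module cache.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const chunkUrl = entry.target.getAttribute('data-on-click');
    if (chunkUrl) {
      import(chunkUrl); // warms the cache; the click handler awaits the same import
      observer.unobserve(entry.target);
    }
  }
});

document.querySelectorAll('[data-on-click]').forEach((el) => observer.observe(el));
```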
Yeah... this screams of people who have never had bad networks. If you are going to add interactivity as a JIT thing, what happens when the user has a shitty connection? You give the impression that the page has loaded, but it hasn't really loaded. This greatly increases the number of requests the user has to make. It's _more_ overhead, with a bunch of extra HTTP calls.
The goal here, as I understand it, is sites whose main purpose is to be read but that can be built using fully interactive tooling. Fast initial readability is paramount, and the alternative to JIT interactivity would be a full page transition, so the network problem doesn't actually make anything worse.
It may not be the best possible set of trade-offs for any particular application but it seems like a set of trade-offs worth exploring.
It's all to appease the Google black box. UX always takes a back seat to SEO, because if there are no users, there's no one to irritate with bad UX in the first place. If it weren't for SEO we would all have dropped SSR+hydration long ago. Absolutely no one likes unifying URLs and content and all that shit on two sides of a single app.
The company I work for is making a big deal about client-side hydration. We use nextjs for our sites. My director was asking about Svelte.
"Something i wanted to mention is both the React team and nextjs team are aware of this and are working on a solution to address needing to load Javascript on the client. Its called React Server Components
We can try it out today on a platform that supports a node environment. This is from nextjs docs. I have a few thoughts on Svelte, but just wanted to point this out!"
With Server Components, there's zero client-side JavaScript needed, making page rendering faster.
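For anyone who hasn't seen it, the Next.js app-router flavor of this looks roughly like the following (the component names and API URL are made up; the server/client split is the point):

```tsx
// app/page.tsx — a Server Component by default: its code never ships to the client.
import { LikeButton } from './LikeButton';

export default async function Page() {
  const post = await fetch('https://example.com/api/post/1').then((r) => r.json());
  return (
    <article>
      <h1>{post.title}</h1>
      <p>{post.body}</p>
      {/* Only this island's code gets bundled for the browser. */}
      <LikeButton />
    </article>
  );
}
```

```tsx
// app/LikeButton.tsx — opts into being a Client Component.
'use client';
import { useState } from 'react';

export function LikeButton() {
  const [likes, setLikes] = useState(0);
  return <button onClick={() => setLikes(likes + 1)}>Likes: {likes}</button>;
}
```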
If you're into this sort of thing you should look into Marko. They're kind of obsessed about page load perf with the justification that it's good for e-commerce. They've also been into streaming, partial hydration, out of order rendering, etc for years as part of that general effort.
This article complains that downloading stuff is slow and then goes on to propose a solution where you have to wait for stuff to download before firing a scroll handler.
Totally fair. The intended argument is that downloading (and parsing and executing) an entire page's worth of JS just to make one button click work is slow.
But if you can hydrate granularly, and prefetch smartly (based on visibility, analytics, etc.), things speed up a lot.
If done right, there is no delay on interaction, and a lot less time and fewer resources are required to load a page, improving Lighthouse scores and time-to-interactive specifically.
That's what we've seen in the field too; the FAQs in the article link to some real-world examples. Though I can't say our prefetching is as smart in practice yet as we want, so sometimes there is a delay on the very first interaction. There is a straightforward way to improve this that we are working on.
While we are on the topic of browser event handlers and "embracing how browser actually work", I can't help but mention that the builder.io website's top navbar cannot handle ctrl + clicking (for opening links in a new tab).
It's actually quite subtle. Sometimes it works, sometimes it doesn't, depending on which page you are on, what you've already clicked, etc. All part of the fun of frontend web development, ain't it?
Oops, thanks for catching that. I implemented the client-side routing there from scratch with Partytown and must have forgotten a check to make sure the ctrl/cmd/shift keys aren't down before canceling the event for client-side routes. Will get that fixed this week.
It seems like just trading pre-loading for lag on first interaction, and trading bundling stuff into as few requests as possible for many smaller requests, each with their own headers.
I mean, it's fine to have a choice about these trade-offs, but you can do it right now just by splitting your application into parts and hydrating only the part the user interacts with. That gives you the additional flexibility of automatically hydrating the part the user is most likely to use and hydrating the others in the background during periods of user inactivity.
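Something like this hand-rolled version, for instance (assuming the server already rendered the same `Comments` markup into `#comments`; the module path and id are made up):

```tsx
import { hydrateRoot } from 'react-dom/client';
import { createElement } from 'react';

const island = document.getElementById('comments')!;

// Leave the server-rendered comments inert until the user actually touches them,
// then download the component code and hydrate just that subtree.
island.addEventListener(
  'pointerdown',
  async () => {
    const { Comments } = await import('./Comments');
    hydrateRoot(island, createElement(Comments));
  },
  { once: true },
);
```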
Also, this article focuses very much on event handlers, but the main part of hydration is the creation of the dynamic structures that allow the application to re-render dynamically and efficiently, sometimes swapping out large parts of the page content that were not delivered with the initial pre-rendered HTML.
If you really wanted to improve the situation, you could work on introducing on-demand partial hydration into React, and on ways to serialize most of the internal structures of React apps, like the virtual DOM, so they can be passed along with the pre-rendered HTML to make the remaining requests lighter.
We've been doing SSR all along, folks: server side templates.
Yeah, HTML was pretty hamstrung as a hypermedia, which made for mediocre UX, but that's been fixed by libraries like Unpoly, Hotwire, or my own, htmx.
Is this condescension really necessary? I don’t think it helps the discussion generally, but particularly from authors of libraries in a similar space with a different approach. Lots of others in the space, many whose work would likely get a similar reaction, are quite welcoming to competing approaches and even speak highly of them.
That said, I think you might want to consider looking more closely at how Qwik works. It produces markup metadata that’s not dissimilar to what I see in htmx. I don’t know if it’s a direct inspiration, but that similarity seems particularly odd to dismiss so bluntly.
The major philosophical difference between the two is the authoring experience: Qwik annotates the HTML with a compiler, in htmx it appears the expectation is you write the annotations directly. Qwik’s server side templates just happen to be authored as JSX components. Both are completely valid! Probably more a matter of preference than anything.
Personally, I prefer the Qwik approach. But I welcome yours as well and encourage people who would prefer it to choose it. Both are significantly better, in many cases, for users than the current outcomes from many other frameworks which appeal to the devs Qwik is targeting. Isn’t that also welcome given the state of web dev today?
Hey, thank you for this response though! It’s much more what I like/hope to see in the ecosystem. Also I’m an old curmudgeon so I relate, and sometimes need a nudge to turn the contrarian knob down to friendly :)
htmx is the first thing that’s made me feel like I can build a useful front end again in years. Plain server side rendering of templates and a bit of sprinkles on top. It’s awesome and thank you!
Traditional SSR involved a separation between templates which contain presentation logic, and controllers for mediating business logic, persistence etc.
If your backend and frontend are in the same language, or you use template engines with implementations in multiple languages like Handlebars/Pug/Soy, you can easily render the same templates using JS, and your client side can have as much UI state, interactivity, etc. as you want.
If we adopt incremental enhancement, then the fetching of templates can be delayed: we primarily need the controllers which handle DOM events to make the server-rendered UI interactive. This is easily achievable through libraries like Stimulus, where controllers can add complex interactivity to server-rendered templates and re-render them if needed using templates fetched on demand. We can even preserve form element state by using libraries like morphdom for swapping content.
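For example, a minimal Stimulus-style controller for the comment-collapse case might look like this (controller and target names are just for illustration):

```js
// collapse_controller.js
import { Controller } from "@hotwired/stimulus"

export default class extends Controller {
  static targets = ["body"]

  // Wired up via data-action; the template needs no knowledge of this code.
  toggle() {
    this.bodyTarget.hidden = !this.bodyTarget.hidden
  }
}
```

```html
<!-- server-rendered markup; no component re-render needed to toggle -->
<div data-controller="collapse">
  <button data-action="click->collapse#toggle">[-]</button>
  <div data-collapse-target="body">…the comment text…</div>
</div>
```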
However, what really breaks down all of the above is the concept of components as popularized by React etc. When we start writing React-style components, our rendering logic and associated behavior are tightly coupled, and we need to pull in all the rendering logic just to enhance the server-rendered content. React devs like to preach that traditional separation of concerns is not useful in practice and that it is better to have rendering code colocated with behavior - but solutions like this demonstrate that the separation did actually have some merit, albeit at the cost of some indirection.
What solutions like Qwik are attempting to do is enable folks to keep writing component-oriented code, but now we need fancy compiler tooling that deeply integrates with the stack. The approach does have its merits, but it is just one path to addressing the problem.
I built my startup on Elixir. It's allowed us to do things in days that would take weeks in Node.js. Absolutely fantastic system overall. Biggest gripe is that it's hard to hire for.
You didn't address my last points but I'll still address this comment.
> this very sentence sounds absurd
Which part? "hydration", "improve the initial load times", or "PWAs"?
Let me rephrase if you are confused. It renders a snapshot of the app on the server so when you first load the web app it's rendered already. Then the client picks it up from there. It's completely optional to do this.
> how many websites out there need to work offline?
Are you asking if it's useful to have access to information and entertainment offline? For me the answer is yes.
It depends what you are making, but yes I think you should strive to make things work offline if you can.
Also websites can work offline without CSR. That's not really what this is about.
Hydration is about improving initial load times of CSR. I really don't know how to simplify this further.
Isomorphic (the same js libraries on the server & client) software is also nice. With WASM, it is possible with other programming languages, though js/ts has a large head start in the isomorphic web space.
You don’t need dependencies like webpack, React, or even any JavaScript at all to show static pages… and it would be a poor use of them just to write static HTML. But if you want interactive elements like image editors, rich text editing, or fancy tables, then you start needing smarter and smarter build tools to bundle it all together and support the maintenance of complex interactive web applications.
I'd also argue that having a static build tool that generates HTML allows you to write more HTML, and more complex HTML, because you can abstract away reusable components. So there are benefits even if you are serving non-interactive HTML pages. Just cache the React build output.
I don't use React-based static site generators, but I'd imagine if they become slow it's because they're generating a large codebase. Unless you provide some benchmarks comparing React compilation against some other templating system, I can't really take your argument against it for this use case.
The developer experience is better. I personally like type safe templates, and just the JSX bits of React are one of the most mature type safe templating frameworks with very good editor support.
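A minimal sketch of using just the JSX/type-safety bit as a build-time template, with no client JS shipped (file paths and the `Page` component are hypothetical):

```tsx
// build.tsx — run once at build time, e.g. with ts-node or tsx.
import { mkdirSync, writeFileSync } from 'node:fs';
import { renderToStaticMarkup } from 'react-dom/server';

// A type-safe "template": props are checked by the compiler.
function Page({ title, items }: { title: string; items: string[] }) {
  return (
    <main>
      <h1>{title}</h1>
      <ul>{items.map((item) => <li key={item}>{item}</li>)}</ul>
    </main>
  );
}

const html = '<!doctype html>' + renderToStaticMarkup(
  <Page title="Hello" items={['one', 'two']} />
);
mkdirSync('dist', { recursive: true });
writeFileSync('dist/index.html', html);
```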
I've been brainwashed by consumers – they demand and expect a certain experience on the modern web. Also, you can run Next.js on a Raspberry Pi and free Cloudflare DNS+SSL. :shrug:
The fallacy of our generation. Except they literally don't. They just don't want their computer to run out of battery loading your resume-driven-development site.
You're wrong. I used to run node + mongo on my 3B+ until it died on me one last time :(
It could manage much, much more than that. Now, I haven't tried running Next on it but unless it's worse than Nuxt (which isn't all that fast, really), it should be fine.