IMHO this approach is better for most apps. Remix is bending the curve here. They make it extremely easy to progressively enhance a small part of your UI with all the power of React.
That said, using React on the server side and forcing you to run JS on the server is not my cup of tea...
The programming model is totally different when working with HTMX. The server returns HTML fragments instead of the JSON API that is the usual approach when building apps with React/Preact and the like.
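To make the contrast concrete, here's a minimal sketch assuming a Node/Express backend and a made-up /contacts resource (htmx itself doesn't care what language the server is written in):

    // Sketch only: Express server with a hypothetical /contacts resource.
    import express from "express";

    const app = express();

    // SPA style: the endpoint returns JSON, and the client-side React/Preact
    // code is responsible for turning that data into DOM.
    app.get("/api/contacts", (_req, res) => {
      res.json([{ id: 1, name: "Ada" }]);
    });

    // htmx style: the endpoint returns the HTML fragment itself, and htmx
    // swaps it into the page (e.g. via hx-get + hx-target on an element).
    app.get("/contacts", (_req, res) => {
      res.send("<ul><li>Ada</li></ul>");
    });

    app.listen(3000);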
Yea htmx definitely has a different programming model than an SPA style framework like React/Preact. I actually like the JSON api programming model where the backend is as simple as possible, and the frontend js/ts code is completely responsible for handling the UI, but I guess it's a matter of personal preference.
The linked article is referring to a team switching from React to htmx though, so for them I'd imagine it would've been a much easier transition if they'd just added a Webpack alias[1] that replaced React with preact/compat, rather than switching to a completely different UI paradigm.
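For reference, the swap is roughly this, a sketch based on the aliases the Preact compat docs recommend (the rest of the webpack config is omitted):

    // Sketch: webpack.config.ts with the preact/compat aliases.
    import type { Configuration } from "webpack";

    const config: Configuration = {
      resolve: {
        alias: {
          react: "preact/compat",
          "react-dom/test-utils": "preact/test-utils",
          "react-dom": "preact/compat",
          "react/jsx-runtime": "preact/jsx-runtime",
        },
      },
    };

    export default config;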
The difference in programming model IS THE advantage of HTMX. Changing from react to preact is a perf optimization.
If you are doing the SPA thing correctly, you must duplicate business logic (data validation) on both sides. Also, when your API just sends "dumb data" (JSON) to the client, you are forced to make changes to the backend and frontend in lockstep.
According to the article we're commenting on, the programming model is not the sole reason for switching from React to Htmx. Look at the executive summary's bullet points. Four of them are related to performance (build time, time to interactive, data set size, and memory usage). Performance might not be the reason you personally use Htmx, but it's certainly put forward as an advantage in the article which we're commenting on (which is from the htmx team by the way).
I've often seen this meme repeated in debates, that js frameworks require you to duplicate logic between the server and client, but most of the logic isn't being duplicated between the client and server; it's being moved from the server to the client. It's relocation, not duplication. Instead of your rails/django controllers deciding what html to render, that decision happens on the client. In this model your server is mostly just an authorization layer and an interface between the user and the database. Hasura, Firebase, Firestore, MongoDB Realm, and several other products have been successfully built around this premise. You might not like the thick-client thin-server model, which is completely fine, but it's a somewhat subjective preference. The only objective criterion you might use to decide which approach to take is performance.
The article we're commenting on dedicates an entire section to talking about the dev team makeup and how it completely changed and unified the team approach to being fullstack. That's how thoroughly htmx changed the programming model.
You haven't disagreed with anything I said. My original comment says "Htmx is great for developers who need client side interactivity, but would rather not write any js". I'm sure a team of Python developers is enjoying not writing Javascript. My point is that you don't have to switch to a completely different paradigm to reap the performance benefits that the article lists.
"We are fond of talking about the HOWL stack: Hypermedia On Whatever you'd Like. The idea is that, by returning to a (more powerful) Hypermedia Architecture, you can use whatever backend language you'd like: python, lisp, haskell, go, java, c#, whatever. Even javascript, if you like.
Since you are using hypermedia & HTML for your server interactions, you don't feel that pressure to adopt javascript on the backend that a huge javascript front end produces. You can still use javascript, of course, (perhaps in the form of alpine.js) but you use it in the manner it was originally intended: as a light, front end scripting language for enhancing your application. Or, if you are brave, perhaps you can try hyperscript for these needs.
This is a world we would prefer to live in: many programming language options, each with their own strengths, technical cultures and thriving communities, all able to participate in the web development world through the magic of more powerful hypermedia, rather than a monolith of SPAs-talking-to-Node-in-JSON."
People certainly unify their stack on JS that way. For Contexte their team was one React/JS dev, two Django/python, and one full stack. So they had two and a half people writing python and one and a half writing Javascript before, then three python developers after.
Switching to JS would require a full backend rewrite. The backend devs may not have done well with the switch, so they may have had to let go two developers and hire a new JS dev. The presenter might have been among those let go under that approach, which is clearly not ideal for the person driving the project.
You'd also need to address the perf issues. htmx clearly sped things up. Is the alternative JS SSR?
Validating data on the client and not doing it again on the server will lead to security vulnerabilities. At the very least you will end up duplicating that validation code.
If I were using htmx I would still want to validate data on the client and the server. If you don't validate data on the client, then you're effectively allowing clients to DDoS your server with invalid data.
Client side validation does not protect against that. You can't prevent people from making requests to your server without your client, i.e. `curl https://example.com/api --data 'dkfjrgoegvjergv'`.
Well yeah obviously you can bypass the client code and directly connect to a server. That's not my point.
Client side validation doesn't prevent a malicious user from sending invalid requests, but it can prevent legitimate users from accidentally sending invalid data to your server. In fact, if I see validation failures showing up in my server logs for something I know should have been filtered out by client side validation, I can mark that IP address as potentially malicious and rate-limit its future requests.
And as a user I would rather find out about validation issues immediately instead of waiting for a network round trip to the server. If I'm typing in a password for example and it doesn't meet the website's length/complexity requirements, I'd rather know as I'm typing instead of waiting for an HTTP request to complete. That extra HTTP request is wasting the user's bandwidth and the server's resources.
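As a sketch of that immediate-feedback case (the element IDs and the length policy are made up, and the server would still re-run the same check on submit):

    // Sketch: client-side length check so the user gets feedback while typing,
    // with no network round trip. The server must still validate on submit.
    const MIN_PASSWORD_LENGTH = 12; // assumed policy, not from the thread

    const input = document.querySelector<HTMLInputElement>("#password");
    const hint = document.querySelector<HTMLElement>("#password-hint");

    input?.addEventListener("input", () => {
      const tooShort = input.value.length < MIN_PASSWORD_LENGTH;
      if (hint) {
        hint.textContent = tooShort
          ? `Password must be at least ${MIN_PASSWORD_LENGTH} characters`
          : "";
      }
    });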
SPAs were a workaround for servers with slow CPUs serving millions of requests in the mid-2000s. Client computers were faster, so it made sense to push UX logic there.
We've flipped things around. Servers are fast as hell at rendering HTML. We can leverage that and re-focus the client on UI code that only it can do.
IMHO something slightly more complex than this example (more client side state) will require using something like React/Vue/Svelte. But there's no need to rewrite the whole FE. Just create a component that performs the rich client side interaction and use HTMX for the rest of your app. Win-Win.
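A rough sketch of that split, assuming Preact and a hypothetical RichChart component; everything outside the island keeps using plain htmx attributes:

    // Sketch: mount one rich, stateful component as an "island" while
    // htmx (hx-get/hx-post on ordinary elements) drives the rest of the page.
    import { h, render } from "preact";
    import { RichChart } from "./rich-chart"; // hypothetical component

    const mountPoint = document.getElementById("chart-island");
    if (mountPoint) {
      render(h(RichChart, { endpoint: "/api/chart-data" }), mountPoint);
    }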
I've been using this for years. It's really good, and the best feature is that the diagrams don't look like crap by default, which is not the case for most code-to-diagram tools out there. Looking at you, PlantUML.
Ember's templates allow the framework to determine which portions of the DOM will never change, so it only needs to analyze the portions that might. For example:
    <div class="container">     <-- this won't change
      <h1>Hello World</h1>      <-- this won't change
      <div>{{name}}</div>       <-- this might change
    </div>
Wow, thanks a lot. Now I understand that sentence.
But I think this may now force us to use more Handlebars. Manipulations in didInsertElement may be affected as well, like updating classes, which I sometimes prefer doing in hooks like click or didInsertElement.
Additionally, rather than having a "virtualDOM" we build a tree of the dynamic data. This is more or less diffed similarly to how a virtual DOM is diffed.
But where it gets interesting is the actual DOM interaction. To create DOM, we use document fragments + cloneNode, but for granular updates we use property/attribute/textContent updates. When used correctly, this combination turns out to be very fast.
As a bonus, we are typically able to utilize the browser's built-in XSS protection and sanitization (or just lack of parsing) rather than having to implement this slowly in JavaScript ourselves.
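A loose sketch of that technique in plain DOM code (this is not Ember's actual implementation, just the general idea being described):

    // Sketch: parse the static structure once, clone it cheaply for each render,
    // then do granular textContent updates for the dynamic parts.
    const tmpl = document.createElement("template");
    tmpl.innerHTML =
      '<div class="container"><h1>Hello World</h1><div class="name"></div></div>';

    // Cloning a pre-parsed fragment avoids re-parsing the HTML on every render.
    const fragment = tmpl.content.cloneNode(true) as DocumentFragment;
    const nameNode = fragment.querySelector(".name")!;
    document.body.appendChild(fragment);

    // Granular update: only the dynamic text changes, and because it goes
    // through textContent the browser never parses it as HTML (the built-in
    // XSS protection mentioned above).
    function setName(name: string) {
      nameNode.textContent = name;
    }

    setName("Alice <script>alert(1)</script>"); // rendered as literal text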
Ultimately, I am extremely happy with how the various front-end frameworks keep pushing the envelope: getting faster, easier to use, and more secure. Regardless of the framework, the ecosystem moving forward benefits end users the most.
Thanks Stef. This explains a lot. It's great that Ember responded in the best way after many started pointing out Ember's performance lag. I was a lil' skeptical when you announced this in December, but now I am looking forward to the release.
This is irrelevant to the changes from the PR. Manual DOM updates are not managed by HTMLBars anyway, regardless of the rendering algorithm.
That said, I think that binding classes (like `class={{foo}}`) and updating them through HTMLBars is a safer way to do it, compared to direct DOM manipulation.