The initial motivation for intercooler.js (which htmx grew out of) was performance. I was working on a large bulk table update and building the table dynamically in javascript. The performance was terrible (this was back in 2012; no idea what it would be like today).
I realized that I could just deliver HTML and slam it into the DOM, and that the browser engine, written in C, was very fast at rendering it.
That turned into a pretty big javascript function, which then turned into intercooler, which then turned into htmx: https://htmx.org
From a glance through the docs, I'm pretty sure I can replace about 30% of my website's javascript codebase with this. Not to mention that having to actually write the javascript to spruce up a form often leads me to be lazy and just have people deal with an un-spruced-up form.
I'm itching to get off work and try it out for real :) Thank you for all the work you did on this.
I could definitely see something like intercooler boosting productivity. My only concern (possibly unfounded) is that you end up designing all of your server-side endpoints specifically around intercooler. All of your endpoints must now return an HTML snippet, which is specific to the design of the page. So you have a lot of page-specific endpoints. This is in contrast to a REST API, where the API can be designed largely independently of any one use case.
If you ever need to switch away from intercooler you are going to have a large undertaking not just on the front end, but now on the back end as well.
If you don't need a REST API, having one just makes everything far more complicated, i.e. every page now has two endpoints: one for the HTML, one for the data.
It's trivial in most web frameworks to have an endpoint respond in two ways: with the full HTML, including head, menus, footers, etc., when you hit it with a regular GET, and with just the snippet when you hit it with an Ajax request.
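For example, here's a minimal sketch of that, assuming an Express app and keying off the HX-Request header that htmx sends on its requests (the data and markup are just invented to make the shape of the endpoint concrete):

    const express = require("express");
    const app = express();

    // invented data and render helpers, purely for illustration
    const contacts = ["Alice", "Bob"];
    const renderRows = () =>
      contacts.map(c => `<tr><td>${c}</td></tr>`).join("");
    const renderFullPage = rows =>
      `<html><body><h1>Contacts</h1><table>${rows}</table></body></html>`;

    app.get("/contacts", (req, res) => {
      if (req.get("HX-Request")) {
        res.send(renderRows());                  // Ajax request: just the snippet
      } else {
        res.send(renderFullPage(renderRows()));  // regular GET: head, menus, footer, etc.
      }
    });

    app.listen(3000);

(With jQuery-style Ajax you'd check X-Requested-With instead; the idea is the same.)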
A REST API is basically a massive over-complication unless you actually need it for a good reason, say you're running both a web app and a mobile app from it.
I've used this technique occasionally for over a decade, and I've personally always found this server-side approach very simple compared to juggling REST APIs with client-side rendering, which I've only done when a client or an existing code base demanded it.
I've also always found the defence "you might need to switch" to be a flimsy one. Usually when you do need to switch, everything is so different that even your "future-proof" API design needs a massive overhaul, because you made assumptions you didn't even realize you were making.
Think of all those SOAP or XML APIs that were future-proof...
Fair points. A couple of notes, though: intercooler needs two endpoints as well, one for the page and another for any dynamic HTML.
I mean, I switched from jQuery to Knockout to React, all on one application, and the API served all those transitions well. So I'm speaking from personal experience here. But that is anecdotal and maybe it's not typical.
One pattern that I have used with some success is to reuse endpoints and use the metadata that comes along with intercooler or htmx requests to determine the structure of the output.
For example, say I have search functionality at /search and I'm implementing an active search pattern: I'll re-use the /search URL for the partial search results and check the HX-Request header to determine whether I want to render the entire search UI or just the search results.
If you use hx-push-url as well, you can get a search dialog that acts like an active search for the user but also retains copy-and-pasteable URLs.
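Concretely, the markup ends up looking something like this (the ids and parameter name here are just invented for illustration):

    <input type="search" name="q"
           hx-get="/search"
           hx-trigger="keyup changed delay:500ms"
           hx-target="#search-results"
           hx-push-url="true">

    <div id="search-results">
      <!-- /search returns only this fragment when it sees the HX-Request
           header, and the full search UI otherwise -->
    </div>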
Just adding some additional commentary based on your post.
I think you're talking about the difference between an "experience API" - that is, an API whose sole purpose is to support user experiences/clients - and a "system" or "process" API, where the latter is for application or process integration between many systems. These are terms borrowed from MuleSoft, but I do like the terminology; I find it helpful for segregating concerns.
There are a lot of reasons people need separate experience APIs to power specialized UI/UX - especially given the needs of different client platforms (e.g. chat bots vs. phones vs. desktop browsers) - separate from system/process APIs.
Yeah, I would recommend adopting htmx (or intercooler) incrementally, where it adds the most value. And when it doesn't "feel right" for a particular use case, don't use it there. This minimizes your commitment to the approach and lets you use the right tool for whatever UX job you have at hand.
EDIT: I should have read your comment more closely. With respect to two endpoints, one HTML and one JSON: I view the JSON and HTML endpoints as separate problems that both benefit from not being conflated with one another.
Your HTML endpoints are tuned to the particular use cases of your UX (e.g. active search), with the caching and tuning required for your specific needs.
The JSON endpoints need to be general and support unknown third-party client needs, and thus require more expressivity (e.g. GraphQL) at the cost of not being tuned for particular use cases.
I tried to get this idea across in this older blog post:
> All of your endpoints must now return an HTML snippet, which is specific to the design of the page. So you have a lot of page-specific endpoints.
This is OK; adding an endpoint to spit out HTML is super simple, it's just printf statements with angle brackets.
Besides that, I find endpoints need to be somewhat coupled to the UI anyway; otherwise the endpoint needs to return a superset of all possible data, with all the complications that come with that.
I love seeing more work in this space! This sounds similar to the approach Basecamp has taken with tools like stimulus.js (combined with Rails UJS and Turbolinks). You can make a really responsive page with normal server-side HTML templates that "sprinkle" in the ajax functionality.
Tools like this feel very familiar to those of us who got started before the rise of the modern JS framework. It's kind of fun to see articles like the OP's pop up where people are rediscovering these techniques.
The problem is that HTML was never completed as a hypertext; they just kinda stopped at anchor tags and forms.
There isn't a good reason that only anchors and forms should be able to specify HTTP requests. There isn't a good reason that only clicks or form submits should be able to trigger HTTP requests. There isn't a good reason that only POST and GET should be readily available (and POST only for forms.) And there isn't a good reason you should have to replace the whole page on every HTTP request, rather than a component within it.
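To make that concrete (the URLs and ids here are just invented for illustration), with htmx any element can issue any verb, on any event, and swap just a part of the page:

    <button hx-delete="/contacts/42"
            hx-target="#contact-42"
            hx-swap="outerHTML">
      Delete
    </button>

    <div hx-get="/news" hx-trigger="every 30s">
      ...news fragment, refreshed in place...
    </div>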
htmx is an attempt to complete HTML as a hypertext.
> There isn't a good reason that only anchors and forms should be able to specify HTTP requests. There isn't a good reason that only clicks or form submits should be able to trigger HTTP requests. There isn't a good reason that only POST and GET should be readily available (and POST only for forms.)
That's an excellent and thought-provoking way to think about it.
I'd always been mentally locked into HTML's basic "the web is a series of linked pages" paradigm that has been in effect ever since it debuted, thinking that to do anything outside of that paradigm you'd obviously want to resort to manipulating the DOM directly with javascript.
But there's really no reason for responsibilities to be divided in quite that manner. There's really no reason HTML itself can't encompass somewhat more robust hypertext features, with declarative support for functionality like "this link should load URI abc in the xyz region of the current page."
Frames, of course, did sort of do that natively in HTML, but that was a very clunky implementation to put it mildly.
I can think of potential arguments against what you say, but I think I agree...
Right. And not only is there no reason for responsibilities to be divided that way, but as it stands you kneecap the promise of REST/HATEOAS by restricting it to the very specific cases of anchors and forms.
So with htmx I'm trying to complete HTML as a hypertext and let people take advantage of the simplicity of the REST model without sacrificing user experience.
Once I got past certain mental blocks, it became fairly obvious that you can structure all of your GUI scripts to be configured and to interact with one another via the DOM.
The next insight was to use CSS selectors for targeting.
After that came using consistent name prefixes and separating behaviors into self-contained libraries.
The stuff above was proof-of-concept. The possibilities behind this approach are mostly unexplored.
I love seeing people invest time in this area, kudos. One thing that's a bit hard for me to justify, though, is examples like "click to edit". There is a very noticeable delay as you wait for the network request that returns the edit form, whereas you typically won't see that with client-side view logic. Is this just not a problem that htmx is trying to solve?
This is really interesting. Thanks for all the effort you've put into this project.
I had one quick question - how easy/difficult would it be to integrate another JS library with intercooler or htmx? For example, let's say a table is fetched dynamically via htmx, how would we go about integrating a library that does client-side table sorting/filtering?
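One possible approach, sketched here with an invented initTableSort() standing in for whatever sorting/filtering library you actually pick: htmx's documented htmx.onLoad() hook fires for content it swaps into the DOM, so you can initialize the library on freshly loaded tables there (this assumes htmx is loaded on the page).

    // sketch: initialize a (hypothetical) table plugin on content htmx swaps in
    htmx.onLoad(function (content) {
      content.querySelectorAll("table.sortable").forEach(function (table) {
        initTableSort(table); // stand-in for the real library's init call
      });
    });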