The stack is not the problem; the problem is people using the wrong tools and over-engineering.
React and the others are simply tools for moving logic to the client. They are useful in situations like apps that can run almost completely without the server, or that need to handle being offline.
All the related tools are only optional things to improve something:
• transpilation for older browsers
• tailwindcss to manage the styles (if your team is large)
• redux or others to manage the application state
For other kinds of applications, you can keep using the pattern where the server has the main responsibility of generating the entire view.
In my case, I’m quite happy using htmx and avoiding frontend frameworks because usually I don’t develop offline apps.
1) Some front-end needs and/or frameworks were developed by companies whose scale and pool of talent justified new tools tailored to their use cases (an extreme audience means addressing many user specificities, and a deep talent pool means some of the best brains on the planet invent something, which doesn't help in coming up with a trivial solution). React, Flutter, AMP.
2) During the ZIRP era, coming up with great tech was also a way to attract talent. Using such tech was (and is) a way to be part of that group too. There is a trend effect. I wonder whether complex front-end tech stacks are justified when you can't hire the best talent and when you need to move faster to stay in business.
3) Microservice-all-the-things leads to using APIs everywhere, which leads to having consumers everywhere, including your front end. Coming back from this could mean using simpler apps and dropping some of the front-end complexity.
The front end is where the greatest leverage to control user behavior is. Therefore, demand to add features there is also the strongest.
Recall that what got investors excited about the Web in the 90's was inline images. They didn't respond to hypertext, but they were looking for a next thing after "multimedia" and saw opportunities. Next thing you know, they were writing thought-leader articles about "push content", imagining the TV-ification of the Web.
The complexity in front-end code does not follow from that. User-facing features are orthogonal to the issues described in the article (those are all developer-facing), and none of the described issues are new in any way; other environments handle them well:
no universal import system: Practically all other languages have a universal import system.
minification, uglification, and transpilation: Many other languages are not just minified, but actually compiled to machine code or at least VM bytecode, and still handle source mapping, debugging and code references in stack traces better.
different environments: This point is the only one that partially applies, because the front-end is fixed to JS due to browsers. Non-browser front-ends exist though, e.g. native apps, and have no problems sharing code through libraries.
file structure: littering the root folder with config files is annoying but hardly a real issue.
Configuration hell: Works fine in other languages (not all of them, though)
Development parity: Works fine in other languages (not all of them, though)
It is rather simple why: frontend is not linear programming. Most backend actions are single-flow (on A, do B); in the frontend the user can interrupt things at any point, which calls for state management.
Most backend applications are stateless and state management is outsourced to a database which does the heavy lifting. So the complexities are in scaling. Maintaining a complex frontend application is akin to maintaining a complex caching layer in front of your database.
The tooling hell doesn't help of course, but I wouldn't say it is the main reason.
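The interruptibility argument above can be made concrete with a reducer-style sketch: because user events arrive in any order, the state transitions live in one central function instead of a linear top-to-bottom flow. All names here are illustrative, not from any particular framework:

```javascript
// Minimal sketch of why UI state management exists: the user can fire
// events whenever they like, so every transition is handled in one place.
function reducer(state, event) {
  switch (event.type) {
    case 'add_to_cart':
      return { ...state, cart: [...state.cart, event.item] };
    case 'cancel': // the user can interrupt at any point
      return { ...state, cart: [] };
    default:
      return state;
  }
}

let state = { cart: [] };
// Events arrive in whatever order the user produces them:
for (const ev of [
  { type: 'add_to_cart', item: 'shoes' },
  { type: 'add_to_cart', item: 'hat' },
  { type: 'cancel' },
]) {
  state = reducer(state, ev);
}
console.log(state.cart.length); // 0, the cancel wiped the cart
```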
There is nothing you can do in the front end that a .NET Forms application from a few years ago could not also easily handle.
And the .NET Forms application would be immeasurably simpler in terms of complexity and, as a bonus, would have the backend thrown in almost for free as well.
I deliberately picked .NET Forms because despite being much simpler than today’s front-end stacks, it was still, much like any MS product, an over-engineered, corporate-driven MS tech.
Something like Ruby on Rails, Laravel etc shows that front end is not inherently complex.
Rendering 3D VR environments with interactive points of interest, live video players with real-time highlight markings, sports standings with graphs comparing parallel games in the same league, hosting online conferences with 20K visitors who can interact with each other through (video) (group) chat.
So why did the frontend engineers move on from Ruby on Rails?
When I was an ROR developer, RailsCasts would tell you to do basically the same thing as HTMX, return partial HTML and use a tiny bit of JS to update the appropriate part of the DOM.
It’s a good fit for simple experiences, but breaks down when the result needs to update more than one place (say, a counter by the cart icon, or a set of options in a select in the sidebar). Then your hypermedia approach has forced you to parse your own HTML snippet to pull out the relevant information, rather than getting it in a nice structured format.
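The multi-target problem is easy to sketch if we model the page as a plain object of regions instead of a real DOM: with an HTML-partial approach, one action makes the server return several fragments that the client routes by target id (htmx calls this an out-of-band swap). All the names below are made up for illustration:

```javascript
// Adding an item must update both the cart list and the counter badge.
const page = { 'cart-list': '<ul></ul>', 'cart-badge': '0' };

// Pretend server response: fragments keyed by target region
// (a hypothetical wire format, standing in for hx-swap-oob fragments).
function serverAddToCart(cart) {
  cart.push('shoes');
  return {
    'cart-list': `<ul>${cart.map((i) => `<li>${i}</li>`).join('')}</ul>`,
    'cart-badge': String(cart.length),
  };
}

// Client side: swap each fragment into its region by id.
const cart = [];
for (const [region, html] of Object.entries(serverAddToCart(cart))) {
  page[region] = html;
}
console.log(page['cart-badge']); // prints 1
```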
this is a common sentiment in these anti-modern-FE threads. it's a problem of imagination; i build internal tools for a large company you've definitely heard of. my org is several thousand people. we absolutely need to use react for the complex internal 2d/3d combo applications we build for debugging our next-gen products. given all the different platforms we need to support and the realtime nature of these incredibly complex tools, we _must_ build them for the web; we tried native applications and it does not scale.
many applications are much more complex than some simple forms.
Making websites into single page applications / dynamically javascript powered separated the frontend and the backend, and made the backend significantly easier. But it made the backend easier by pushing a ton of those previously backend concerns to the frontend. And then we wonder why the frontend is so complicated.
Yeah - it’s a wildly different paradigm. It’s not quite the same as Erlang/OTP, but it requires a mindset shift much like learning OTP and thinking of callbacks on an event loop working together. Users can interrupt code when they want, and aren’t obligated to take a blessed path of actions.
I will say that some of the complications he mentions are from web apps needing to compile down to a single executable, on a platform that only really supports one interpreted language. Perhaps WASM will help here, over time.
Another wrinkle - lots of web apps have a need to say, I’m going to give you one bundle of JS for this whole hostname, regardless of URL, but then have to handle getting loaded from arbitrary URLs (that may have semantic meaning for your server) anyways. Everyone gets the complexity of a URL document hierarchy, even if your web app isn’t document-based.
true, it is called routing. But to be fair, most mobile apps also have routing, usually with semantics similar to, but still different from, web-based routing. The reason mobile apps have it is deep linking.
Old-school desktop applications didn't really have deep linking; even today it is quite uncommon (except to trigger some action in the app, like opening a file, as opposed to navigation).
This is the entire reason. HTML, JS, and CSS weren't designed for their current purpose and updating them requires slow coordination across browser developers.
Once you start using compile-to-JS and get out of the JS ecosystem mess, the developer experience suddenly feels much less complicated.
What specific compile-to-JS ecosystem do you have in mind? I don't think I've encountered one that doesn't add complexity and layers of indirection. It may be worth it, but it doesn't come for free.
The easiest approach I've ever worked with is vanilla JS. But of course, building complicated stateful apps without a view layer like React is its own complication.
We've had good success with Blazor. The abstractions haven't been leaky. I haven't used it, but supposedly Kotlin-to-JS is excellent as well (and gives you access to a stunning amount of the JVM ecosystem at the same time). People on HN rave about Elm, too.
That just explains why we know about that complexity.
Alternatives aren't less complex (and when you need some niche feature the web platform supports, like printers and Bluetooth, you're in dependency hell; at least you're operating without node.js), but data flows more linearly from server to client.
Sorry, I was thinking of the cross-platform desktop+mobile league!
Obviously, when you develop for a specific OS and a single form factor, even with an SDK as neglected as any Linux GUI toolkit, things will be a lot simpler, and if you face complexity there, it's an issue rather than a natural consequence.
Furthermore, the classic excuse for complex JS front ends is that Fortune 500 firms, for whom React and Angular were made, face problems with access and speed in rolling out features that probably only MMO games face, and MMOs, for most of their history, were developed under Windows only.
Or: on the backend there has been constant evolution of frameworks, but browsers didn't provide a lot of features that IMHO should have been in place, such as ways to split JS and CSS files and import them into each other, or having variables in CSS, and such.
Because it's always morphing. Once upon a time, it was only desktop. Then it became mobile. Then the browser added "features" and websites wanted that "feature". Video. Animation. Accessibility. Wasm. Then where do we host websites? It used to be a desktop in a closet; then it became dedicated, containerized, serverless, server-side, client-side, monolith, micro-arch. Information morphs, too. Tracking, telemetry, A/B testing, search, session state, data storage, AI.
The beauty of programming general computers is the endless flexibility. It's also the biggest danger. Holding it together against entropy and adversarial compute actually runs against how people like to code: experimentally and moving on as soon as the new desired feature is online. So project management and security are both bolted on and increase complexity greatly. Specialists are also incentivized to do a lot of arcane stuff, because if it looks good and sounds complicated it bolsters the value of their skills.
Computers and computer networks were designed in a high-trust environment to facilitate free communication. They were pioneered by academics and military organizations, where only highly credentialed people ever touched anything. When the web went commercial in 1993, I think we started a Cambrian explosion of diversity in computation. I guess it wouldn't have gone as far without two decades of zero-interest money and VC-backed 'growth hacking'.
This article is kind of odd in that it implies that HTML inherently satisfies the uniform interface constraint, and somehow JSON can't. JSON can provide a linked representation of the state of the application just like HTML can.
It does mention this in the section "“REST” Wins, Kinda…"
"""
Some pushed through to Level 3 by incorporating hypermedia controls in their responses, but nearly all these APIs still needed to publish documentation, indicating that the “Glory of REST” was not being achieved.
JSON taking over as the response format should have been a strong hint as well: JSON is obviously not a hypertext. You can impose hypermedia controls on top of it, but it isn’t natural.
"""
Personally, the section "The Crux of REST: The Uniform Interface & HATEOAS" says it all.
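For concreteness, here is the grandparent's point in miniature: a HAL-style JSON representation (a real convention, though not mandated by any spec) can embed the legal next actions as links, so the client discovers what it may do rather than hard-coding it. The resource and link names are invented for illustration:

```javascript
// A JSON representation carrying hypermedia controls in _links.
const account = {
  balance: 100,
  _links: {
    self:     { href: '/accounts/12345' },
    deposit:  { href: '/accounts/12345/deposit' },
    withdraw: { href: '/accounts/12345/withdraw' },
    // if the account were overdrawn, the server would simply omit
    // "withdraw", and a well-behaved client would not offer it
  },
};

// A HATEOAS-aware client inspects the representation, not its own rules:
const canWithdraw = 'withdraw' in account._links;
console.log(canWithdraw); // true
```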
Computer-to-human interaction is vastly more complicated than computer-to-computer interaction. System complexity can be bounded by well-defined rules. UI needs to account for different screen sizes, input methods, human error, changing trends, even psychology.
Part of it is Flash going away too soon, before there was a viable replacement for rich user experiences, and it’s taking some time to make that happen (and then some).
Things have been quite brittle and have required constant updating, or more than seems reasonable. Many devs don’t remember a better or simpler time.
There seem to be some emerging options, whether it’s the Livewire-type technologies or the most recent new curves that libraries like Svelte, Flutter, Alpine.js and more have taken, providing most of the bang with less overhead.
The real sad reality is that the web is just too archaic; it was never designed with "apps" or even multimedia in mind.
We tried to move past this by going pedal to the metal with JavaScript, and now we are slowly realizing that, while we can do this, there are major scaling issues.
And so now the hot new thing is things like svelte.
Rinse and repeat.
And honestly, at this point I'm leaning more and more towards Dart + Flutter being the future, or maybe something else with strict unified standards that is designed with apps and multimedia in mind (like what Flash partially was).
I don't understand why so much of the static web is a mess of javascript anyway.
So many pages could be served as static content that I don't understand why they need javascript at all.
Whatever happened to the interest in static site generators a few years back? It seemed like we were finally going to move away from heavy javascript just to show some text and images, and yet the javascript seems to be getting ever heavier.
> Whatever happened to the interest in static site generators a few years back?
I'm still interested... I forked the pug templating language in order to write my own tools, since the existing ones didn't do what I want. I have been working on it in my free time.
Here are some reasons for it. FWIW I explored a hypothetical alternative non-web architecture that could avoid some of these issues here [1].
1. Encapsulated GUI components are a non-negotiable requirement for most projects, even apparently "static" stuff like technical manuals (e.g. search widgets), but there is/was no widely accepted standard for server-side UI components. The closest was things like JSP taglibs, but that is now considered legacy technology, gone in favor of React SSR. Why? Well, because components are most useful when you can instantiate and mutate them, which means they're most useful on the client, but server-side tag libraries lost all the componentization when crossing the wire, leaving you with a "tag soup".
2. HTML and CSS, being as they are committee driven and implemented multiple times in security sensitive C++, evolve very slowly. In practice people's ideas about how to present content, even so-called "static" content, change faster than the standards can keep up, so people fall back to JS to cross the gap. But then you get the impedance mismatch that comes from mixing several different programming technologies together (HTML, JS/TS, CSS, C++), and the DX goes to hell.
3. Why is it a "mess", well that's mostly an unarticulated social preference. Developers seem to prefer open source bazaars on the frontend, even if it means a horrific DX in which solutions have to be stitched together out of lots of tiny half-abandoned libraries. This is probably a legacy of the churn of the 2000-2010 era in which many large, well thought out proprietary frontend app frameworks ended up being abandoned by their owners or experiencing severe strategic product management errors. Delphi, VB6, Flash, Silverlight, .NET WinForms, .NET WPF, Java Swing, JavaFX, GTK, (to some extent) Cocoa and Qt and so on ... these all provided much more seamless and coherent platforms for writing apps but ended up leaving stranded user bases behind after the backing companies didn't execute properly on sandboxing/security/deployment/cross platform support.
Or to put it crudely, a part of the reason people target the web is the lack of product managers involved in defining the platform. Chrome has them in theory but in practice a lot of stuff they add to the platform is deliberately unambitious incrementalism, and the web's enormous base of valuable but unmaintained content means they aren't able to break backwards compatibility on the core tech despite having huge budgets. The downside is that anything not provided by the base platform experiences the opposite effect where the gaps get filled by enthusiastic volunteers who don't plan together or even stick around very long.
There's nothing fundamental about this choice, as the wholesale adoption of iOS and cloud tech shows. Devs will buy into fully proprietary platforms in a heartbeat if it's convenient to do so, but this is partly because those vendors have proven to be quite good about backwards compatibility and incremental development: there has never been an "AWS 2" effort and Cocoa's evolution has been quite smooth from the NeXTStep days.
One way to escape the complexity of front-end development is to write for non-HTML platforms instead. Historically this was quite painful because most work gets done on desktops, but whilst desktop development could be quite pleasant desktop distribution was extremely painful. I've spent a couple of years working on that problem and it's now way easier than it once was (check out [2]) so deployment complexity for devs is increasingly no longer a concern. If you want to write in Jetpack Compose or JavaFX or Flutter, or indeed Electron, then you can do that and the whole deploy/update story has got nice and easy. The big gap that remains is certificate cost, but we're looking at fixing that by signing for you if your app fits inside a sandbox. We're scouting around to understand demand at the moment.
One of the reasons is that users' expectations of UX rose by one to two orders of magnitude in the last two decades.
Whereas the platform (the web) and the languages (HTML, JS and CSS) do not offer a cohesive answer to those expectations. It's all just bits and pieces of improvement here and there.
Often the frameworks that aim to solve this use abstractions over these languages and APIs (e.g. the DOM, routing). And these abstractions (TS, JSX, React, CSS-in-JS, Tailwind...) are not a cohesive unit either, and bring back the same friction, or more, that's inherent to the web (three languages trying to play with each other)...
...but this time with even more "parts" and abstractions.
Web technologies weren't designed to build web apps. And since a re-design/rewrite is off the table, we're content with small improvements that help but also increase friction.
Because using the web as an "app platform" is a hack. Manipulating HTML with JavaScript to achieve app-like functionality is a hack.
JS, along with other things, was bolted on as an afterthought, because HTML had too much momentum for people to stop and consider a proper way to have sandboxed, portable applications, which is what WASM seems to be, after all this time.
And when there's a quirky platform underneath (HTML + JS), people invent a million opinionated ways to achieve the same thing, because there's no single right way to do it. And each comes with its own quirks.
I've been using Rust and WASM for my latest front-end project, and I think this setup is a viable alternative to commonly used JS frameworks for those willing to put in some effort to ramp up on new technology. Addressing the concerns from the article:
"No universal import system" - Rust has its own module system, and Cargo is used for managing dependencies; no need to worry about different module systems.
"Layers of minification, uglification, and transpilation." Just compile Rust to a WASM file for the browser, the same as using any other compile target.
"Wildly different environments." Something that you'll still need to deal with. Some runtime dependencies are system-specific (code running on the browser usually needs access to Web APIs, and JavaScript, code running on the server can't access WebAPIs but can access the system clock and filesystem. Sometimes separate libraries or separate runtime configs are needed (e.g. configurable time source)
"Overemphasis on file structure." Not a problem for imports, but you may still have file structure dependencies things like CSS, image resources etc.
"Configuration hell." Pretty much non-existent once you have your Rust compiler set up locally.
"Development parity." Just use trunk (https://trunkrs.dev/) to watch, build and serve; config is minimal.
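The "just compile to a WASM file" step can be seen from the host side with no toolchain at all. The bytes below are a hand-assembled module exporting add(a, b), a stand-in for what a trivial Rust `pub extern "C" fn add` would compile to; the point is that the browser (or Node) only instantiates bytes and calls exports, like any other compile target:

```javascript
// A complete, valid WebAssembly module, written out byte by byte.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// The host side is the same whether the module came from Rust, C, or Go:
const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(exports.add(2, 3)); // 5
```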
The reliance on Node.js, which lacks a standard library. It lacks a built-in build process. It lacks a built-in lint/format process. It lacks a built-in test runner (although I believe this isn’t true anymore?). It did have a module import mechanism, but it was so badly implemented (or maybe it lacked one in the beginning?) that despite Node.js being the standard, most people are still using CommonJS require.
I don’t know which, if any, nodejs alternative will succeed it, but if say Deno were to do so, the stack would be immeasurably simpler.
Right now, hopping between two different JS projects that do the exact same thing means you may have to learn completely different build processes, completely different linting rules, completely different TypeScript compilers, completely different module import syntaxes/formats/configurations, completely different test runners and test description languages, completely different standard libs (one may have lodash while the other imports individual functions from npm), etc.
Heck, even your package manager may not be npm but rather yarn, pnpm, etc.
I believe nodeJS’s decision to essentially outsource all basic functionality while the JS ecosystem figured itself out was a huge reason for its success, but now that many things are more established, it’s causing a lot of unnecessary complexity.
I have a customer that uses Vue. It's much easier than React, but still not easy enough. The complication, as usual, is the state management. There are props local to a component and a global state. Computed values and "real" values. There are different ways to update those values, some as easy as assigning to this.property and some requiring complicated calls to functions somewhere else. Again, compared to React there is less time lost in boilerplate and puzzles, but it's still too much, given that often all I want to do is the equivalent of
    this.parent.aList.push(item)

or

    globalState.customersList.push(item)

Ideally I'd write it in JavaScript instead of using an API. I would accept

    globalState(customerList, "push", item)
Edit: I'll add an extra nuisance that could be solved by a better and more straightforward syntax.
I have to work with code like this
store.js:

    import * as model from './modules/model'

component.vue:

    import { mapState } from 'vuex'

    computed: {
      ...mapState('model', ['model']),
    }

    this.$store.dispatch('model/method', {...})
which calls "method" defined in "model", and which is very convoluted compared to what we are used to in other languages:

    import Model from 'store/model'
    Model.update(args)
I often think that framework and library authors don't try hard enough to build tools with simple interfaces. That reminds me of Erlang/Elixir's handle_call/handle_cast.
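The ergonomics the comment asks for (globalState.customersList.push(item) just working) can be sketched with a Proxy. Vue 3's reactive() works roughly along these lines, though this toy version is not its actual implementation:

```javascript
// Toy reactivity: plain mutations notify the UI, no dispatch() needed.
function reactive(target, onChange) {
  return new Proxy(target, {
    get(obj, key) {
      const value = obj[key];
      // wrap nested objects/arrays so deep mutations are observed too
      return typeof value === 'object' && value !== null
        ? reactive(value, onChange)
        : value;
    },
    set(obj, key, value) {
      obj[key] = value;
      onChange(); // a real framework would schedule a re-render here
      return true;
    },
  });
}

let renders = 0;
const state = reactive({ customers: [] }, () => { renders += 1; });

state.customers.push('ada'); // plain JavaScript, yet the UI is notified
console.log(renders > 0);    // true (push sets an index and the length)
```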
Well, most of it is for backward compatibility, I guess, so your newest frontend can still run in a browser as old as Internet Explorer by changing a few Babel settings.
Some of it is to optimize code delivery, so you're sending just the bare minimum source code and not wasting users' bandwidth.
Of course, if you had just one newest browser you could do away with most of it, but at the end of the day you have to make sure your frontend runs everywhere, including mobile devices; hence the complexity.
The amazing part is that ever since Vite came on the scene, a lot of it has been abstracted away. There is no need to even compile anything during dev, which has been a game changer.
It's still ridiculously complicated in so many other ways, ignoring backwards compat completely.
I wish I could describe my most recent attempt at migrating our app to the new Next 13 app router for an audience, on camera, on stage. The levels of confusion and dead ends, and configuration, and error screens, and the need for truly expert-level knowledge just to get things working as one would expect made me realize there's just no way this can survive as it currently stands. It's all an abomination. React is dead. FE is dead.
Please just give me back a simple React.renderToString mounted into an Express wildcard route, hooked into React Router. All of these perf concerns are for the .0001% of people who even notice this shit, or need things to run so ideologically fast that they're willing to throw out every bit of common sense in service to an abstraction that is DOA as soon as you use it to do anything complicated at all, or apply it to an existing codebase.
As an interesting contrast, I work on a WPF app professionally that has been around for sure since .NET 3.5 (with references to .NET 2.0 DLLs at times), at least 12 years, and as much as we give MS crap for abandoning WPF, I can still crank the project open in the latest Visual Studio, probably transparently upgrade to .NET 6, and everything just works. There are a lot of advantages to web-based frontends, but sometimes I think desktop apps are underrated from a stability perspective.
Windows native desktop apps are OK in some controlled (or mandated) environments. For example, 20 years ago I was working for a large company that handed only Windows XP laptops to every single employee.
But what if the customer has a mix of Macs, Windows, Linux laptops? Or if before even getting there, they think that they want a web app so they engage only companies that build web apps? A native desktop app will never happen.
By the way, that company I was working at 20 years ago, despite being Windows-only, had a number of web apps, including time tracking and everything else. I didn't investigate the reasons, but I could guess one: distribution of updates.
Yes, update speed, iteration speed and general ease of deployment are a huge part of it. Also the ability to develop on Mac/Linux but deploy to Windows.
One of the popular features of the desktop deployment tool I like to shill here sometimes is web-style "aggressive updates", which basically means synchronous update checks on every launch. If the user starts the app and there's an update available a delta is downloaded, applied and the app then launches all without any prompts or user interactions required. As long as users restart the app from time to time it means they stay fully up to date and so bugfixes can be deployed very quickly. This isn't quite as fast a deployment cycle as a multi-page web app, but is comparable to a SPA as users can leave tabs open for quite long periods.
Weirdly, AFAIK no other deployment tool has a feature like that. It's unique to Conveyor. Desktop update tech is pretty stagnant and is optimized for a world where updates are assumed to be rare and optional, so support for fast delta updates or synchronous updates are often rough/missing. When your frontend has a complex relationship with a rapidly changing backend / protocol though, that isn't good enough.
Also, from the start we made it able to build packages for every OS from your dev laptop, whatever you choose to run, no native tooling required. So you've got a deployment process very similar to an HTML static site generator. Hopefully this closes the gap between web and desktop deployment enough for more people to explore non-web dev.
If memory serves correctly, I think that's close to how ClickOnce worked/works? - but Windows only. One of the apps I worked on does it, but it was a homegrown framework. Definitely the sort of thing it's nice to delegate to a specialized system where possible.
Well, Java Web Start could do it too, I think, but none of these systems are active anymore, and of course none of them were a "native" desktop experience.
Too bad Microsoft refuses to work on proper cross-platform WPF support. I've tried Avalonia UI[0], but it's just not the same. For instance, the lack of a proper out-of-the-box virtualized list.
The app router is definitely not a valid alternative for my current projects on the pages router. I'm glad I'm old enough not to jump on shiny new things and burn myself in the process. I would actually like to take a step back from Next.js and move to something simpler. Ideally still server-side rendered TSX, but with some jQuery-like interactivity sprinkled on top on the client.
Vite FTW. It has drastically simplified my development experience, and it's crazy fast! I don't know why Next.js is considered the default for React development.
The DX is poor, and IDK if I'm missing something, but I've found navigating through a Next.js app to be super slow, I mean seconds between page loads. I get that it has its use cases, i.e. rendering static content like blogs and marketing pages, but I've seen so many developers use it for user-based data applications, and that's just wrong IMO.
A somewhat sobering post explaining the background behind all this complexity. It is very easy to rant against complexity on its own, but respecting the background that led to it requires nuance, history and a few more words.
That said, one of the major problems is that there is no clear explanation of these boundaries in the tooling and code. Yes, you can `npm install lib`, and unless it is documented in the readme, it won't be obvious whether it will run in Node only or on the web too.
Because there was a basic set of primitives that could give rise to an almost infinite number of possibilities. Multiple paths and iterative forces have led to a very diverse and thriving evolutionary landscape. It has become gloriously messy.
It would be interesting if those studying biological evolution could see how much of their techniques, theories, and predictive abilities could apply in this realm.
What I did is I removed all linters, like ESLint, Prettier, etc., because they disagreed about syntax. The point is to make the minimal change to the code that fixes something or adds some feature, not to have most commits be about fixing syntax.
I also merged all branches into one main branch, because merging between branches took a lot of time.
What if there were no build step? DHH has some nice ideas about that.