SPAs make sense when your users have long sessions in your app. When it is worth the pain to load a large bundle in exchange for having really small network requests after the load.
Smooth transitions are a nice side effect, but not the reason for an SPA. The core argument of the article, that client-side routing is a solution for page transitions, is a complete misunderstanding of what problems SPAs solve. So absolutely, if you shared that misunderstanding of SPAs and used them to solve the wrong problem, this article is 100% correct.
But SPAs came about in the days of jQuery, not React. You'd have a complex app, and load up a giant pile of jQuery spaghetti, which would then treat each div of your app as its own little mini-app, with lots of small network requests keeping everything in sync. It solved a real problem: not wanting to reload all that code every time a user on an old browser, with a slow connection, changed some data. jQuery made it feasible to do SPAs instead.
Later, React and other frameworks made it less spaghetti-like. And it really took off. Often, for sketchy reasons. But the strongest argument for SPAs remains the single load of a large, cacheable code bundle, in exchange for minimal network traffic after that load, when the expected session time of a user is long enough to be worth the complexity of an SPA.
This article is full of misrepresentations and lazy takes. The author has had other anti-JS polemics widely upvoted on HN, which were just as carelessly written. But people upvote it anyway.
What is the cause of this?
1. Bad experiences with JavaScript apps that have aggregated complexity (be it essential or incidental complexity)?
2. Non-JS developers mystified and irritated at a bunch of practices they've never really internalised?
3. The undercurrent of "frontend is not real programming" prejudice that existed long before React et al. and will continue to exist long after it?
I find myself agreeing with the article (although I also agree that it assumes you've chosen an SPA when you shouldn't have). To add my own perspective:
I work on an app, the front-end of which essentially consists of 6 nav tabs, 3 of which show an index of records with corresponding add/edit forms. We don't have any hyper-fancy interactive components that would require heavy JS libraries. And yet... we develop in React.
Yesterday, I needed to add 1 new field (represented by a checkbox input) to both our app and a corresponding back-end application we have, which uses Rails views.
I checked the git logs after to see how long each took. The Rails view took me literally 2 minutes to update (add field to model, add field to controller, add input to HAML with a class on the parent div). The React view took me 52 minutes, plus I later found out that I'd forgotten a damn type on some interface that's a shallow copy of our model.
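To illustrate the kind of drift that bit me (all names here are invented for the sketch, not our actual code):

    // the model gains a field (say, generated from the backend schema)...
    interface Account {
      id: number;
      name: string;
      newsletterOptIn: boolean; // the new checkbox field
    }

    // ...but a hand-maintained shallow copy used by the React form
    // doesn't, and nothing forces the two to stay in sync
    interface AccountFormValues {
      id: number;
      name: string;
      // newsletterOptIn forgotten here
    }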
Is this a problem with React itself? Not really. But it's a problem in the way that it's used, and our 6 nav tabs and 3 forms don't need all the overhead of React. So for people in a similar situation, this article really rings true.
> The React view took me 52 minutes, plus I later found out that I'd forgotten a damn type on some interface that's a shallow copy of our model.
This sounds like bad architecture, nothing about React would necessitate this. And if your typechecker isn't catching missing types, then it sounds like your types aren't adding much value.
IMHO it allows just the right amount of encapsulation and structure, whilst building on W3C web components and being highly interoperable with other javascript libraries/vanilla code. It's pretty much what web components should be out of the box.
People absolutely use technologies poorly and make things harder than they should be.
At the same time, the part of the program that interfaces with the physical world (e.g., people) is always going to be far more complex and thus harder than the bits that get to live entirely inside the computer world.
I'm sorry but that just sounds like a tooling and skill issue? Adding a new form field takes me a grand total of 5 minutes in the Vue app I maintain at work, and I'm a mostly backend focused fullstacker. We have the form/input components, adding a new field to a type is trivial (and we generate those automatically based on our BE schema anyway), error handling is handled by our form/input components.
Also, I'm not the biggest fan of React and think there are options that are a million times better, like Vue & Svelte, but React is not heavy. Yes, JS frontends pull in a lot of libraries, but React (or any other framework except for probably Angular and Ember) is far from being the biggest or heaviest dependency in any project. In fact, React does this amazingly well: React itself has 0 external dependencies. Usually people will also want react-dom, so for the simplest possible application, that's a grand total of 2 very lightweight dependencies, and for use cases like yours of a few forms, that's literally all you'd need.
Sure, maybe your 3 forms don't need React, but it sounds like you're actively adding stuff to it (and even caught a type issue from the sound of it). The non React version of this work would've entailed targeting a querySelector's inner `.value` key, and then having to parse it if it's not a simple string and safe guard against the element not being there, or targeting a shifting ID or class or all the other numerous and tedious problems that arise. Or, you stick a `v-model` on it, send it as JSON and call it a day, I categorically refuse to pretend the old universe pre-SPA frameworks was in any way a good way of working.
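For the record, the old flow I'm refusing to return to looked roughly like this (selector, field name, and endpoint are all invented for the sketch):

    // grab the input, guarding against the element not being there
    const el = document.querySelector<HTMLInputElement>('#quantity');
    const raw = el?.value ?? '';

    // parse it, since .value is always a string
    const quantity = Number.parseInt(raw, 10);

    if (!Number.isNaN(quantity)) {
      fetch('/api/order', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ quantity }),
      });
    }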
Just because you are comfortable with one technology and inexperienced or unfamiliar with another does not make one better than the other. How much rails have you written in your career? How much react?
I don't know that there's "real" programming, that seems like a hard fight to fight on either side, it's like arguing about whether animals are conscious or something. Are people? Who knows, pass the blunt.
But there's been this really sharp over-correction to where now an obvious thing that is just common knowledge and that was never taboo is now considered impolite to even allude to. Frontend programming is among the easier kinds of software work as measured by the number of people who can do it? I bet if I tried really hard, I could probably be pretty kickass at pickleball, small court, not a lot of moving around. But to be like, that's the same thing as the NFL. No, no it's not. I would never have been able to try out for the NFL, not if I lived a thousand times.
There's pickleball and pro football in programming too.
> as measured by the number of people who can do it
That’s a poor measure. It proves only that there’s a demand for such programming. I have programmed in tens of languages professionally and many more for fun.
Programming is programming. I haven't found much difference in difficulty between any of the stacks I've worked with. Except, maybe, C++, but that's just C++ being garbage. I now happily use Zig as an alternative, and it's no more difficult (easier, in fact) than building a well-architected, complex UI application of any kind.
Front-end programming is easier in the sense that you can make little mistakes and your entire app doesn't fall down. As someone who's done decades of both, there's nothing conceptually easier about well-executed front-end programming over back-end. The stakes just aren't as high.
I think the argument might be that it takes less domain knowledge of hardware and all its abstractions, which does require a minimum threshold of reasoning and abstract thinking ability. I have high confidence someone who could build a database or kernel could also do front end work with a reasonable ramp up time.
I don’t share that confidence for the inverse in the nominal case
I have seen many backend developers with this mindset and approach, and:
1) Tricky parts of frontend are afaict equally tricky as building a DB/kernel/whatever.
2) A typical mistake is that a lack of knowledge about the hard parts of frontend makes backend'ers assume frontend is easy, when in reality it's their ignorance (and arrogance), rather than the subject, that is the issue
3) As with backend, most developers don't deal with the harder parts. Most backend developers I've talked to do simple CRUDing + minor business logic from a DB. Similarly very few developers try to write their own drag and drop library from scratch.
It's sad that so many seem to fall into the trap of 2).
(I've done both types of development for 20+ years)
I have no idea what backend developer means to this or that person. It seems to mean "not frontend", so like, directly interacting with a database and possibly using a compiled or even unmanaged language? But still often deploying through something that looks like:
Haswell <- Borg Hypervisor <- Borg Pod <- KVM Hypervisor <- QEMU guest <- docker-compose <- docker <- golang
?
I'm talking about hackers. I remember being like 24, and a colleague of mine (legend) who had never worked in JavaScript or really the web before was on our pod that got tasked with writing a browser for J2ME and BREW that implemented real web pages.
He goes home that weekend, and he comes back on Monday with a stack machine written in JavaScript (ECMA-262, we ran it on Rhino back then because Spidermonkey was a whole thing) that executed a very cute subset of JavaScript, including lambda closure and therefore Church encoding / untyped System 1. I was like whoa, why in JS? "If I have to implement it in a month, I'm already two months late to start learning it."
Is that guy a frontend developer? Backend? Full Stack? He had worked on DSPs and audio before, and on video codecs and embedded.
My comment above about some of this stuff being harder isn't a diss to anyone; it doesn't make me a millimeter shorter that Carmack is so tall, walking around in some rarefied air of genius I can't even formulate a picture of: it inspires me to work harder, try more ambitious things, push every day a little past yesterday's limit, and it has for more than 30 years now.
There's nothing wrong with programming being a job, it's a perfectly reasonable life choice and a very sensible one in light of life's other demands and opportunities. But some of us fucking love it, think about it all the time, live to be good at it. That's a different set of outcomes. And it does grate a bit to have everyone pushing this "it's all the same, we're all the same, it's one equivalent thing": that's my passion you're talking about, I take great pride in my life's singular ambition and pursuit. We're equal but we're not the same.
The backend has plenty of complexities, but frontend developers have to deal with something just as complex - the user.
Given ramp up time, most backend engineers could build a bad frontend, or build a good one if they have a really good UX team that thought through everything and are just implementing their work.
In the real world though where UX is understaffed and often focused on the wrong problems - I've had to rescue too many frontends built by backend focused teams to share your confidence.
It's also that you're at the top of the stack. If your stuff breaks, there's no layer above you whose stuff also breaks.
Well except the end user, but depending on the app they can often be low-priority (internal apps, apps with captive audiences like online banking or airline websites, etc.).
Most frontend development shouldn't be necessary, as it's writing repetitive code that implements features that are missing from browsers. And it shouldn't be that hard to add those missing features to browsers. They're like half JavaScript at this point...
I started my career in FE and still consider myself a FE dev despite technically being a full stack dev.
Sometimes I'd be working with my team on something and they'd be like "why is this needed?" and I'm like "because javascript" or "because react."
While I agree with your sentiment that FE dev is certainly not simple, JS and front end architecture as a whole does have its faults. That's why highly skilled FE devs who can build scalable, beautiful FE apps (whether using SPA or SSR) can be highly paid.
Note those are not mutually exclusive. It's entirely coherent to believe you find a tool hard to use for reasons relating to the tool itself, and that the task you're trying to accomplish is also difficult independently of that.
Analogy: imagine trying to give a good presentation with a horrible text-to-speech (or translation) system. Just because good presentations are hard that doesn't mean you don't get to complain about the program being terrible.
Yes, but now you're getting into "there are two types of languages, the ones people complain about and the ones nobody uses".
There are obviously flaws and issues and annoyances in js world, but a lot of those come from having to solve much harder problems (compared to say, do a sql query and turn the results into json).
I'll bite. In essence a user interface just presents the data it got from the server in some nice looking shape, and sends any edits and button presses back. Should be simple, right?
Think about the presenting data part of it. Perhaps you have a table of data, prices of tickets or some such. You could literally just wrap each item in a <td> tag and each row in a <tr> tag and that would indeed be easy.
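Concretely, the easy version is just this (a sketch; the row shape is made up):

    type Row = Record<string, string | number>;

    // wrap each item in a <td> and each row in a <tr>, as described above
    function renderTable(rows: Row[]): string {
      if (rows.length === 0) return '<table></table>';
      const cols = Object.keys(rows[0]);
      const body = rows
        .map((r) => '<tr>' + cols.map((c) => `<td>${r[c]}</td>`).join('') + '</tr>')
        .join('');
      return `<table>${body}</table>`;
    }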
But people generally want and expect more. How about color coding the rows to make them easier to scan left to right, or sorting the table by each column, or paginating through more data than can easily be displayed on one screen, or maybe you're doing "infinite scroll" instead.
None of these things are impossible of course, and people have done them so many times and in so many different ways that there are dozens of libraries you can use and hundreds of tutorials, but even so, compare that to the "select from table turn to json" equivalent.
The SQL can be more than a bit tricky, but aside from that, JSON is extremely well defined and specified, even without a library you can just read the specs and do it. Wrapping it in a HTTP response and returning it and so on is likewise very well specified and if you can read, you can follow the instructions on how to do it.
Creating a UI that works the way a user wants to is the opposite of all that.
Of course, one of the major differences here is that at any point you can just stop improving the UI. Maybe you stop after wrapping it in the table html. The UI will certainly work, for a very specific definition of work. JSON is considerably more boolean. It either is a valid JSON document or it isn't. You can ask a computer to check for you. You can't ask a computer to check if your users enjoy using your table.
Are we aligned with the claims in the article that a SPA architecture is not a suitable baseline architecture for content-focused sites with shallow sessions (low number of interactions)?
I don’t know; I think reducing bundle sizes and bloated JavaScript in favor of built-in support for view transitions warranted a blog post against one of the primary arguments for SPAs.
I agree with the author. I love React. I shouldn’t need two dozen more dependencies. We ditched server side for client side in 2010 for speed. Now that we have 200x more compute power, more powerful CSS, more robust html (with web components), we can go back to server side rendering and only send what’s required for the user at the time of action.
You miss the whole point and the author is correct about this:
Modern CSS is powerful, and HTML is the way to deliver web content.
Every web framework is _literally_ a hack to make the web something it isn’t. People who haven’t seen the evolution often pick up the bad habits as best practice.
For those JavaScript people, I recommend trying Laravel or Ruby/Rails. And then once you realize that JavaScript sucks you’ll minimize it.
Laravel is fine. It's not amazing. Like most "Modern PHP" it exhibits a Java fetish and when used carelessly can degrade into an upside-down impression of Enterprise J2EE patterns (but with an almost non-existent type system).
What I find interesting though is the assumption that web dev is done by "JavaScript people", that even the best "JavaScript people" have no technical breadth, and therefore fester in a cesspool of bad ideas, unlike your median backend dev who swims in the azure lakes of programming wisdom.
Now, I've done plenty of SPAs, but I've also done plenty of other things (distributed systems, WebVR apps, metaprogramming tools, DSLs, CLIs, STB software, mobile apps, a smidgeon of embedded and hobbyist PSOne games). Which gives me the benefit of a generalist perspective.
One thing I have observed is that in each silo, there are certain programmers who assume they are the only sane group in the industry. They aren't interested in what problems the other siloes are solving and they assume any solution "over there" they don't understand is bad / decadent / crazy.
These groups all regard each other with contempt, even though they are largely addressing similar fundamental issues in different guises.
It's a social dynamic as much as any technical one.
It’s connected with the questions around occupational licensing of programmers, unions, and similar structures which would not be so much about getting paid more but about getting quality up, squashing bullshit, and getting our quality of life up.
Without a cohesive community, mutual respect, and recognition of a shared body of knowledge, we don’t have the solidarity to make it happen.
As for Laravel, I’d say people were making complex applications (eBay, Amazon, Yahoo) in 1999. Google Maps was better than MapQuest, which drew each image with a cgi-bin, but many SPA applications are form handling applications that could have been implemented with the IBM 360 and 3270 terminal.
The OG web was missing a few things. Forms were usually written on one HTML page and received by a separate cgi-script. To redraw the form in case of errors you need one script that draws the form and draws the response, and a router that chooses which one to draw. You need two handfuls of helper functions, for instance
to make forms which can be populated based on what’s already in your database. People never wrote down “the 15 helper functions” because they were too busy writing frameworks like Laravel and Ruby-on-Rails that did 20x more than you really needed. So the knowledge to build the form applications we were building in 1999 is lost like the knowledge of how the pyramids were built.
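To make that concrete, one of those helpers might look like this; a hedged sketch of the general idea, not a recovered original:

    // render a text input pre-populated from what's already in the database
    function textInput(name: string, current: Record<string, string>): string {
      const value = escapeHtml(current[name] ?? '');
      return `<input type="text" name="${name}" value="${value}">`;
    }

    function escapeHtml(s: string): string {
      return s
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;');
    }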
As for performance, web sites today really are bloated, it’s not just the ads and trackers, it’s shocking how big the <head/> of documents get when you are including three copies of the metadata. If you are just drawing a form and nothing else, it’s amazing how fast you can redraw the whole page; if you are able to delete the junk, old school apps can be as fast as desktop apps on the LAN and still be snappy over mobile.
> The OG web was missing a few things. Forms were usually written on one HTML page and received by a separate cgi-script. To redraw the form in case of errors you need one script that draws the form and draws the response, and a router that chooses which one to draw
Yes, I was there, I wrote and used these pages. It sucked. Things are better now.
> So the knowledge to build the form applications we were building in 1999 is lost like the knowledge of how the pyramids were built.
Building a form with zero user affordances is not difficult. It just isn't good.
We absolutely know how the pyramids were built. You get a whole bunch of humans to move a very large amount of stone and then stack it up in a big pile. The reason nobody builds pyramids today is because we have better alternatives.
I've been in a happy place with React with some projects.
I've worked on some where it was a valid choice, but boy, the annoyances: like not being able to test your components because you're on an old version of React, where your test framework can never know when the last render happened, because you depend on a large number of components which don't believe they'd run on the current React but probably would, if you could just vendorize 30 or so packages and patch their package.json files.
Or depending on a test framework that refuses to look up components by class, id or selector because they want to force you to use aria, even when it doesn't make sense such as in three.js.
Or depending on a routing framework that doesn't get maintained, instead they've been through 7 incompatible versions of it which leaves me thinking that they didn't ever believe in what they were doing.
Or having to understand the internals of 5 CSS frameworks (including JS-heavy frameworks like Emotion) to handle all the components you've sucked in and still understand raw CSS through-and-through to fill the gaps.
I worked on one system which was frankly experimental at a startup that was doing two week sprints building a tool called "Themis" for building ML training sets. The thing was that we were always having to add new tasks and between Docker and an overengineered back end and front end it took 20 minutes to go from source code to being able to interact with the thing, so it took a 20 person team and lots of coordination between FE and BE engineers to do the simplest things.
I sketched out an alternative called "Nemesis" which grew into an HTMX-based system where it takes one programmer an hour to code a task and there is no build, and between Flask and the 15-or-so helpers it is easy. I've hacked it to be an RSS reader, an image sorter, and several other centaurs.
This feels like a desktop app when I am on the LAN and loads well under a second on a mobile hotspot the first time and every time. The key is that the tasks are nearly completely independent of each other, there are no builds and no dependencies, so writing a new task can't break old tasks. That system has plenty of screens that use d3.js for awesome visualizations and if I wanted to make a task that did something complicated which would really deserve React or Svelte or something I could do it, again, without breaking the other tasks.
> I sketched out an alternative called "Nemesis" which grew into an HTMX-based system where it takes one programmer an hour to code a task and there is no build, and between Flask and the 15-or-so helpers it is easy
My argument here is basically creating that is really, really hard, and there's no framework or library that will make it easy.
React and friends are a mess because they're dealing with hard problems, kinda like systemd or kubernetes.
Writing code that lives entirely inside the machine is a lot easier than having to interface with the messy real world.
> Laravel is fine. It's not amazing. Like most "Modern PHP" it exhibits a Java fetish and when used carelessly can degrade into an upside-down impression of Enterprise J2EE patterns (but with an almost non-existent type system).
As opposed to Javascript, the pinnacle of tight standards and high quality performative code...
The same arguments that can be made against PHP/Laravel absolutely apply to Javascript equally, if not more so, due to the pretty well-known shiny object syndrome issues that many JS devs get caught up in.
> One thing I have observed is that in each silo, there are certain programmers who assume they are the only sane group in the industry. They aren't interested in what problems the other siloes are solving and they assume any solution "over there" they don't understand is bad / decadent / crazy
A lot of coders have realized there's social credit in trashing other programmers. I regularly see comments from people claiming to have a singular custodial spirit for their craft, unrivaled by their peers. There's obviously advantages to telling people that everyone else is holding it wrong.
Then their advice is
- Some niche trick that's completely irrelevant to most situations
- Something that sounds obvious and uncontroversial, but is so broad and vague that it's not even worth arguing about.
And I realize the irony of me making this comment. But I wish we'd get over this phase where everyone thinks they're an expert, and that every bad codebase or slow app was the result of someone who didn't care. I have yet to work with someone who didn't care about their code.
> But I wish we'd get over this phase where everyone thinks they're an expert, and that every bad codebase or slow app was the result of someone who didn't care.
I can only speak to my own experience, but it’s been this way for the entire 18 years I’ve been doing this professionally and I can’t see it stopping anytime soon.
I just ignore it and write off people who openly think that way. It shows a lack of experience and empathy.
You could maybe say "every framework is a hack to workaround protocols primarily designed in the 90's before we really understood the full application of the web"
I would go as far as saying that the genius of the web is that it can grow, develop, be hacked, modified, expanded through technical and institutional means to be many things it wasn’t originally envisioned to be. Why is that a bad thing? Why is originalism a good thing?
The internet isn't the web, and there are plenty of applications who use those protocols just fine outside of the web browser. It's the html tech stack initially made for hypertext documents and resources that's been heavily upgraded to do web-based applications.
Are you aware that nowadays you can write SPAs in dozens of languages?
It's an entirely different concept. It's certainly not the right technology for a news site, but days ago in a different place, there was for example the discussion about how an SPA and minimalistic API services fit a lot better with your average embedded device.
Even with Laravel you will write lots of Javascript unless you go for Blade templates or that other templating thing. Javascript is also great for making the web interactive. Maybe the sheer amount of SPAs out there shows us what we really want from the web. Most things people use in their day to day life can't be built with HTML and CSS only.
What 'other' templating thing? I'm assuming you're likely talking about either Inertia or Livewire. Inertia's more geared towards SPAs than Livewire though. Most Laravel devs tend to use JS sparingly - not everything needs JS.
There's next to no reason why the vast majority of sites on the web would ever need to be heavily reliant on JS. Rendered HTML/CSS with JS used sparingly for page functionality is a far better user experience. I'll never understand the obsession with JS for the sake of JS.
> You miss the whole point and the author is correct about this:
your comment is funny, because you are so wrong you aren't even aware how and why you are wrong. It's in "not even wrong" territory. I'll explain why.
> Modern CSS is powerful, and HTML is the way to deliver web content.
Irrelevant. That's not why the world uses JavaScript frameworks. It seems you aren't even aware of the most basic reasons why the world migrated to SPAs. The most basic reasons are performance (and perceived performance), not only because they require less data to be moved around the internet but also because flows don't require full page reloads.
Also, classic old timey server-side rendered WebApps end up being far more complex to develop and maintain as you mix everything together and you are unable to have separation of concerns regarding how your platform is deployed and run and how your frontend works. SPAs even allow your frontend team to go to the extent of treating your backend as if it were a third-party service, and some SPAs are even just that. There are plenty of CMSs out there which eliminate the need for a backend by providing all content needs through their commercial APIs. This makes web app projects greatly cheaper and simpler to finance and support, as you only need to worry about developing and maintaining your SPA.
Lastly, those JavaScript frameworks you're trying to criticize also use CSS and HTML, by the way. So as you may understand your point is moot.
> Every web framework is _literally_ a hack to make the web something it isn’t.
You are clearly talking about things you have no understanding of. It matters nothing if you specify a DOM with a static file or procedurally. Updating only specific elements or branches of a DOM is a very basic use case. In fact if you had any frontend experience at all you'd be aware that all mainstream GUI frameworks, including desktop, represent their UIs with a DOM.
So here you are trying to argue that frontend development is not frontend development just because you have an irrational axe to grind regarding a specific class of web development technologies?
> For those JavaScript people, I recommend trying Laravel or Ruby/Rails.
If you hadn't already proven you are completely ignorant and detached from reality, this argument alone would leave no room for doubt.
> all mainstream GUI frameworks, including desktop, represent their UIs with a DOM.
If you call every hierarchy of visual items with some kind of layout manager(s) a DOM, then yes. Notably, the D doesn't really apply because GUIs aren't documents, and that's exactly why HTML is kind of awkward for GUI programming: it was initially designed for documents.
Edit: Sibling comment makes the good point that the main difference is that GUIs have mutable state while documents don't. I would add that GUIs also have controls to change that mutable state, which is a more superficial difference, but well, web-based GUIs are still extremely varied in their interaction styles, which is not necessarily good.
Not the person you're replying to. I agree with a lot of what you are saying but:
> The most basic reasons are performance (and perceived performance), not only because they require less data to be moved around the internet but also because flows don't require full page reloads.
I can't keep a straight face at this one. If there's one thing the web isn't anymore, it's "fast." Presumably what you are getting at is server performance, because it's pushing all the work to the client.
> > Every web framework is _literally_ a hack to make the web something it isn’t.
> You are clearly talking about things you have no understanding of. It matters nothing if you specify a DOM with a static file or procedurally
Don't be so dismissive. This is like the old anecdote about one fish saying to the other, "how's the water." The "something it isn't" is stateful. The web was not designed to be stateful, and every web framework is indeed a hack to work around that.
Thanks for dissecting all this nonsense. Although in all honesty, we may as well be dealing with comedy here: green account, recommending PHP and Rails in 2025 as if that's supposed to solve anything... we're being trolled.
> ...if you shared that misunderstanding of SPAs and used them to solve the wrong problem, this article is 100% correct.
Agreed. The article was a frustrating read. The author is an SEO consultant. SEO consultants likely have a heavy focus on marketing websites. Actual apps and not marketing websites do benefit significantly from SPA. Imagine building Google Maps without SPA. You can animate page transitions all you want, the experience will suck!
I agree with you. The author's point is that browsers have finally caught up with why some traditional sites were created as SPAs, so those SPAs are now recreating functionality browsers offer out of the box. But that doesn't mean all SPAs should turn into MPAs now.
IMO it will be hard for some traditional sites to adapt to the new browser capabilities, since we've built an entire ecosystem around SPAs. The author's advice should've been: use the browser's built-in capabilities instead of client-side libraries whenever possible.
Also, keep in mind he's sharing his own experience, which might be different from ours. I've used some great SPAs and some terrible ones. The bad ones usually come down to inexperience from developers and hiring managers who don't understand performance, don't measure it, don't handle errors properly, and ignore edge cases.
Some devs build traditional sites as SPAs and leave behind a horrible UX and tech debt the size of Mount Everest. If you don't know much about software architecture, you're more likely to make mistakes, no matter what language or framework you're using.
I realised years ago there's no "better" language, framework, platform, or architecture, just different sets of problems. That's why developers spend so much time debating implementation details instead of focusing on the actual problems or ideas. And that's fine, debates can be useful as long as we don't lose sight of what we're trying to solve and why.
For example: Amazon's developers went with an MPA. Airbnb started as an MPA but now uses a hybrid approach. Google Maps was built as an SPA, while the team behind Search went with an MPA.
Even for basic sites, I tend to reach for an SPA because I never know when I'll be adding dynamic features, anything from basic showing / hiding of content, list / configuration-based rendering, or requesting data.
It's usually inevitable, so it's easier to scaffold a Vite template and get cracking without any additional setup. The time-to-deploy is fast using something like Netlify or Vercel, and then I have the peace of mind knowing I can add additional routes or features in a consistent, predictable way that fits the framework's patterns.
I'd hate to develop an MPA and realize after the fact that now I need complex, shared data across routes and for the UX to be less disrupted by page loads. Once you've dug that hole, you either have to live with it, or face a painful rewrite later.
The exception I often see is targeting mobile devices in low-bandwidth areas where larger application bundles take longer to load, but I have to wonder how often this is the target audience. I live in a place where mobile data speeds are horrible, and access to WiFi is abundant, but even so, I rarely have a situation where I *need* to load up a non-critical site on the go — and having apps pre-installed doesn't really help either when the data they're requesting is barely trickling in on mobile.
So this oft-used exception doesn't really make sense to me unless I learned why this is so critical for some people.
But this is exactly the point of the article. That'd be like if I needed to build a car, I started with a rocket blueprint in case I also need to go to space in the future.
The cost of speculatively building the highly interactive site that users may someday want is a suboptimal baseline experience until you get there. Provided you get there without losing users, it'll still be a subset of your audience that use your thing in a way that truly offsets all of these costs. If that works for your product/business, then awesome! But if these costs are too high and will hurt your product, then you need to be a lot more deliberate about how you engineer it.
I typically approach the problem first and see what tools work best to solve the problem, rather than work backwards from my preferred set of tools.
> Once you've dug that hole, you either have to live with it, or face a painful rewrite later.
Based on the tools you mentioned familiarity with, wouldn't something like Astro be a happy middle ground? You start as an MPA by default and only add complexity to parts (entire routes or parts of a page) of the application that require it? Also, this hole goes both ways. If you've built your site as a SPA, and you realise that your product just needs to be HTML to stay competitive, it's a painful road to unpick the layers of abstraction you've bought if you don't wanna rewrite it.
> dynamic features, anything from basic showing / hiding of content, list / configuration-based rendering, or requesting data
What would a highly dynamic feature be in your opinion, and how does a SPA framework help you? All of the examples mentioned here, in my opinion, are fantastic candidates for server side templating and progressive enhancement. I don't see the need for the SPA architecture here.
> mobile devices in low-bandwidth areas
Curious to know what kind of devices are present in your area? The implications of larger applications are both network and CPU [1], so if you live in a relatively wealthy area (say over 75% of users had iPhones) you'd notice the negative effects of too much JS less. If you're in the public sector or building for the public, then you can't get away with the excuse that people on slow devices and networks aren't your target audience; you need to meet everyone where they're at. An HTML-first architecture is a better and more inclusive baseline for all.
My biggest gripe with Google Maps is when I pan to a specific area, hundreds, or even thousands of miles from my current location, then search for something like “restaurants” and it pans me back home and searches there, so I then have to go find my distant location again and click the search here button.
If this is an intentional choice, all in an effort to show me more local ads… ugh. I really hope this isn’t the case.
Google Maps makes it really hard to explore your travel destination, or I am holding it wrong. That and the lack of weather integration are my main issues with it.
The author states that MPAs are not a solution for everything. Sure, Google Maps doesn't fit the MPA model, but I've seen a lot of projects that would be much better off using current browser features instead of React.
Google Maps is a mess these days. It's glitchy, slow, has broken navigation, and is overloaded with unpredictable dynamic content. It's even worse in the native Android app. I'd totally go for the original version but with the more recent vector maps.
I'll have you know I spent time on organizing and structuring my code with early JS design patterns like IIFEs to limit scope, lazy loading of modules, and minification.
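For anyone who missed that era, the IIFE module pattern looked like this (a generic sketch, not from any particular codebase):

    // an immediately-invoked function expression: everything inside is
    // scoped to the closure, and only the returned object leaks out
    var counter = (function () {
      var count = 0; // private state
      return {
        increment: function () { count += 1; return count; },
      };
    })();

    counter.increment(); // 1
    // count itself is not reachable from here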
Anyway, in my experience, AngularJS was the biggest attempt at making structured front-end applications, and it really appealed / appeals (Angular is still very popular apparently) to Java developers; the biggest draws were its modularization (which wasn't a thing in JS yet), dependency injection, and testability.
When we started out with an app in Backbone (to replace a Flex app because it wouldn't work on the iPad), I actually advocated against things like testing, thinking that the majority of functionality would be in the back-end. I was wrong, and the later AngularJS rebuild was a lot more intensive in front-end testing.
Of course, nowadays I'm repulsed by the verbosity, complexity and indirection of modern-day Angular code. or Java code for that matter.
Angular not only appeals to Java developers, it also appeals to .NET developers. TypeScript of course borrowed a lot from C# (having the same designer) and dependency injection, MVC patterns etc closely resemble .NET patterns.
Interestingly, new Angular is slowly moving away from these, introducing Signals for state management and standalone components, and I see these developers actually struggling a lot to adopt new Angular patterns.
Still, I believe Angular is a really great platform for building B2B or enterprise apps. Its testing and forms handling are far ahead of every other SPA framework, and it actually feels like a cohesive framework where people have spent time designing things the right way; something I absolutely cannot say about React frameworks such as Next.js or Remix.
And GWT which allowed literally running Java on the web (without a plugin, it compiles Java to JS). It still exists and is maintained but not a Google project anymore despite the name.
PS and I forgot to mention, new Angular patterns such as Signals and standalone components greatly cut down on the boilerplate and the verbosity. It’s not (and will never be) something like SolidJS, but each new version is clearly moving away from the heavy OO-based and multi-layered patterns.
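For anyone who hasn't seen the newer style, it looks roughly like this (a sketch against the Angular 16+ signals API; the component and names are invented):

    import { Component, computed, signal } from '@angular/core';

    @Component({
      selector: 'cart-badge',
      standalone: true, // standalone component: no NgModule boilerplate
      template: `<span>{{ label() }}</span>`,
    })
    export class CartBadge {
      items = signal(0);                                 // writable signal
      label = computed(() => `${this.items()} item(s)`); // derived state
      add() { this.items.update((n) => n + 1); }         // template updates on change
    }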
Low-bandwidth/spotty connections (combined with aggressive caching) are one of the strongest cases in favor of SPAs (emphasis on the A for Application, not website). Visit (and cache) the entire frontend for the app when you have a good-enough connection, then further use of the app can proceed with minimal bandwidth usage.
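The usual mechanism for that is a service worker precaching the app shell; a minimal sketch (the cache name and asset list are invented):

    /// <reference lib="webworker" />
    declare const self: ServiceWorkerGlobalScope;

    // on install, cache the whole frontend while the connection is good
    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open('app-shell-v1')
          .then((cache) => cache.addAll(['/', '/bundle.js', '/styles.css']))
      );
    });

    // afterwards, serve from cache first and only hit the network on a miss
    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then((hit) => hit ?? fetch(event.request))
      );
    });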
It really depends. There’s a lot of SPAs which are practically unusable on a bad connection simply because it’s a challenge to even get the whole thing loaded in the first place. There’s been several occasions where I’ve had poor cell connectivity and a huge chunk of the web was effectively off limits due to this.
So in addition to aggressive caching, I’d say keeping the app’s file size down is also pretty important to making it useful on poor connections. That’s frequently something that’s barely optimized at all, unfortunately.
I work on an SPA with hundreds of screens. We package it using off the shelf tooling with barely any configuration. All of the code and styling is still far under a megabyte gzipped.
So unless it is an all text app, the size of the code bundle is probably going to be quickly dwarfed by the size of media like images, animated images, or videos.
If a site has an SPA with a, say, 3mb code bundle, I think in most cases, that’s not an architecture issue. It’s probably an issue of poor engineering and switching to a MPA is not suddenly going to make those engineers competent.
All three comments to this thread have missed the point that OP said installable SPA, not website SPA. This means the primary bundle is downloaded offline and only API network requests are necessary.
I think with brotli over SSE you can do just fine with 3G and bad networks without needing to be an SPA. Keeping almost all state on the server makes realtime collaborative apps much simpler than the data sync you need with an SPA. This demo has zero client side state and handles concurrent interactions over a billion data points [1].
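I can't speak for how that demo is wired up internally, but the general shape of the pattern is something like this (endpoint and target element invented):

    // the client keeps no state of its own; it just swaps in whatever
    // HTML the server streams down over server-sent events
    const events = new EventSource('/updates');
    events.onmessage = (e) => {
      const view = document.querySelector('#view');
      if (view) view.innerHTML = e.data; // server sends already-rendered HTML
    };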
Sadly the art of error handling is often neglected by some SPA developers. I suspect that failures on the client are also less tracked, and this is tolerated by businesses.
Latency might even be more relevant than bandwidth. Especially if it's a good SPA, that uses optimistic updates (or sync), and some kind of caching for fetching data (tanstack query or similar).
Anyone attending a major event, or in a disaster zone. Things are getting better, but if you live near a ball park or something, there will be periodic times when your cellular Internet is unusable
Crazy hills will do in wireless anywhere. In a rural area I am maybe a mile from the tower as the neutrino flies but a photon can’t make it. I have a bar of 5G in the pasture in front of the house, but in the house it promises a bar of LTE, and whether I can get out a text (w/o WiFi) depends on atmospheric conditions.
Out in Cali they have ring of fire conditions.
Too much crowding can do it too. A decade ago I was going to Cali a lot and thought it was funny to see so many ads for geospatial apps on TV that always showed a map of San Francisco when for me SF was where GIS went to die. Taking my Garmin handheld to the roof of the Moscone center because it was the only place it could get a clear view of the sky to sync up with GPS, so many twisty little streets that routing algorithms would struggle…. Being guided around to closed restaurant after closed restaurant and walking past 10+ open ones because a co-worker was using Yelp, etc.
Back around 2001 I visited South Carolina, and it was like being transported to the future of mobile internet. They had some kind of high-bandwidth cellular setup in the area that was far ahead of the rest of the country at the time, I think I recall it being around 20Mbit wireless. I was told the area was a testing ground for new tech. I was kind of shocked that somewhere that seemed so stuck in the past had such cutting edge tech deployed. I thought why is this not in SF??
I live in the middle of a major UK city, which is one of the most visited tourist destinations in the UK (if not the world). There are massive gaps of mobile coverage in the city - 5G is spotty at best, and it regularly falls back to much older protocols in the city. There are dead zones where you can literally walk 6 ft and drop to 0 coverage, and walk another 6ft and be on full blown 5G. Apps and sites like uber, twitter, Reddit, instagram all handle these awfully.
Often times my house. I live in one of the 20 largest metro areas in the US. There is a cellular dead spot around my house, seemingly from AT&T and Verizon. Phones work, but barely. Pages with high data demands become a problem.
There are folks who work with US-based nonprofits, NGOs, and agencies who live all over the world, including regions where local internet access is either non-existent or very slow. Some US-based organizations they work with have had to set up low-bandwidth methods of communicating. Yes - sometimes geosynchronous satellites are the only connectivity available.
There is this post about an experiment at Google where they reduced the page weight and the traffic went up instead of down. That was because it opened the site up to countries with low internet bandwidth: https://blog.chriszacharias.com/page-weight-matters
> in exchange for having really small network requests after the load.
I'd love to see examples of where this is actually the case and it's drastically different from just sending HTML on the wire.
Most SPAs I've worked on/with end up making dozens of large calls after loading and are far far slower than just sending the equivalent final HTML across from the start. And you can't say that JSON magically compresses somehow better than HTML because HTML compresses incredibly well.
Most arguments about network concerns making SPAs a better choice are either propaganda or superstition that doesn't pan out in practice.
BREACH would be the relevant attack for content-encoding compression, it's only good for guessing the content of the response that can't actually be read otherwise, i.e. stealing a csrf token in cross-site requests, requires that the server echo back a chosen plaintext in the response (e.g. a provided query string), and takes thousands of requests to pull it off.
It's a vanishingly small number of things that are actually vulnerable to this attack, and I've never even heard of a successful real-world exploit (tho it's not like the attackers that might use this go and tell everyone).
With HTML you have to send both the template and the data. With json, it's just the data. So it's less information total. It should compress a little better, but I don't have stats to back that up.
Your comment then falls under my "superstition" label.
My experience has been that the HTML version will send overall less data since it contains precisely what is required by the UI and nothing more. The JSON APIs try to be "generic" across clients and either send more than the client needs or send less and cause the client to wait on a waterfall of requests to load everything.
You should always run benchmarks for your use case but the majority of web projects are not Figma or AutoCAD and benefit drastically from simpler approaches. A single compressed HTML response will beat a cascade of JSON ones every time.
Why does this have to be the baseline architecture when you can render the HTML on the server with the template and data? Why send the data and the JavaScript to parse that data and transform it into HTML in a users browser when you can do it on the server?
For requests after the first, you can still continue to send the rendered HTML to be placed into the document. Here's an example using HTMX: https://htmx.org/examples/lazy-load/
HTML templates are still text, and text compresses well. As with all these discussions, “it depends, profile it” is the only answer. People blindly assuming that X is better is why things are slow and shitty in the first place.
Do browsers have trouble loading and rendering HTML in 2025? Page load should be blazing fast if it's not loaded down with a bunch of other stuff that hasn't already been cached.
They don't. The reason to do it is not to save bytes, but because you have a dynamic page that needs to respond to the user without doing a full page refresh on every minor interaction.
If you work at a place that has a modern CI/CD pipeline then your multiple deployments per day are likely rebuilding that large bundle of JS on deploy and invalidating any cache.
HTTP/2 has been adopted by browsers for like 10 years now, and its multiplexing makes packaging large single bundles of JS irrelevant. SPAs that package large bundles don’t leverage modern browser and server capabilities.
> HTTP/2 has been adopted by browsers for like 10 years now, and its multiplexing makes packaging large single bundles of JS irrelevant
H2 doesn’t make packaging irrelevant… there’s still per-request overhead with many small files… and larger bundles tend to compress better (though compression dictionaries might help here)
Languages... (is jQuery a language? I guess so, let's go with that)... live in a context... there is culture, tooling, libraries, frameworks. Some languages have good culture, some have bad culture. I guess it's not even so black and white: languages have good or bad culture in different areas: testing, cleanliness, coding standards, security, etc. If jQuery is misused in the hands of bad programmers ALL THE TIME, that becomes the culture. Not much to do about it anymore once the concrete has set. You can still be an exception to the rules, good for you! But that doesn't change the culture...?
> If jQuery is misused in the hands of bad programmers ALL THE TIME, that becomes the culture.
My bet is that everyone here both agrees with you and is able to replace "jQuery" with "HTML", "CSS", and "JavaScript" to reach similar conclusions about the cultures of each. The problem is bad programmers, not the tech.
It drives me crazy when it’s used together with React; I want to have one authoritative copy of the state of my app, and jQuery bypasses that, at least if I’m using controlled forms.
Now I used to hate uncontrolled forms but now I like react-hook-form.
> SPAs make sense when your users have long sessions in your app.
SPAs also make sense when you want to decouple the front end from the back end, so that you have a stable interface like a RESTful API and once AngularJS gets deprecated you can move to something else, or that when your outdated Spring app needs to be updated, you'll have no server side rendering related dependencies to update (or that will realistically prevent you from doing updates, especially when JSF behavior has changed between versions, breaking your entire app when you update).
> When it is worth the pain to load a large bundle in exchange for having really small network requests after the load.
The slight difference in user experience might not even enter the equation, compared to the pain that you'd have 5 years down the line maintaining the project. As for loading the app, bundle splitting is very much a thing and often times you also get the advantage of scoped CSS (e.g. works nicely in Vue) and a bunch of other things.
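Bundle splitting in particular is nearly free these days; with Vue Router, for example, it's just dynamic imports at the route level (paths and components invented for the sketch):

    // each lazy route becomes its own chunk, fetched on first visit
    const routes = [
      { path: '/', component: () => import('./pages/HomePage.vue') },
      { path: '/admin', component: () => import('./pages/AdminPage.vue') },
    ];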
I don't know if a bunch of sloppy jQuery modules were ever really a viable option for an SPA. People tried to do it, sure, but I'd say the SPA era really started with backbone.js
Depends on the definition of SPA, but in the days of jQuery, I hardly consider any of that a single page app. For example, the server-rendered page had most of the HTML initially rendered, jQuery would attach a bunch of listeners, and then on an update it would patch the page incrementally. If lucky, we had a duplicated x-template mustache tag which had a logic-less template that we could use to update parts. jQuery and duplication were the “problem” which drove everyone to SPAs.
But you were talking about code, not data, hence my question. Also, Amazon doesn’t need to be that way (and wasn’t twenty years ago, the motivating period we are talking about).
SPAs are nice when your app requires complex state; multiple rows of nested tabs, modals, multiple interlinked select inputs which load data dynamically, charts or graphs which can lazy-load data and update on the fly in response to user actions.
There is a certain level of complexity beyond which you need to load data on the fly (instead of all up front on page load) and you literally cannot avoid an SPA. Choosing to build an SPA is not just some arbitrary whimsical decision that you can always avoid.
Sometimes people just go straight to SPA because they're unsure about the level of complexity of the app they're building and they build an SPA just to be sure it can handle all the requirements which might come up later.
One of my first jobs involved rebuilding a multi-page EdTech 'website' as an SPA, the multi-page site was extremely limiting, slow and not user-friendly for the complex use cases which had to be supported. There was a lot of overlay functionality which wouldn't make sense as separate pages. Also, complex state had to be maintained in the URL and the access controls were nuanced (more secure, easier enforce and monitor via remote API calls than serving up HTML which can mix and match data from a range of sources).
I think a lot of the critiques of SPAs are actually about specific front end frameworks like React. A lot of developers do not like React for many of the reasons mentioned like 'resetting scrollbars' etc... React is literally a hack to try to bypass the DOM. It was built on the assumption that the DOM would always be unwieldy and impossible to extend, but that did not turn out to be the case.
Nowadays, with custom web components, the DOM is actually pretty easy to work with directly but info about this seems to be suppressed due to React's popularity. Having worked with a wide range of front end frameworks including React for many years, the developer experience with Web Components is incomparably superior; it works exactly as you expect, there are no weird rendering glitches or timing issues or weird gotchas that you have to dig into. You can have complex nested components; it's fast and you have full control over the rendering order.
You can implement your own reactivity easily by watching attributes from inside a Web Component. The level of flexibility and reliability you get is incomparable to frameworks like React; also you don't need to download anything, you don't need to bundle any libraries (or if you do, you can choose how to bundle them and to what extent; you have fine-grained control over the pre-loading of scripts/modules), and the code is idiomatic.
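As a sketch of that attribute-watching reactivity (element and attribute names invented):

    // re-renders whenever the observed attribute changes
    class CounterBadge extends HTMLElement {
      static observedAttributes = ['count'];
      connectedCallback() { this.render(); }
      attributeChangedCallback() { this.render(); }
      private render() {
        this.textContent = `Count: ${this.getAttribute('count') ?? '0'}`;
      }
    }
    customElements.define('counter-badge', CounterBadge);
    // usage: <counter-badge count="3"></counter-badge>;
    // setting the count attribute from anywhere re-renders the element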
> there are no weird rendering glitches or timing issues or weird gotchas that you have to dig into.
Ehm... define the Web Component render blocking in the head, because you want to prevent FOUCs.
Then try to access the .innerHTML of your Web Component in the connectedCallback.
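For anyone who hasn't hit this one, a sketch of what they mean:

    // if the defining script is render-blocking in <head>, the element is
    // upgraded while the parser is still mid-document, so its children
    // haven't been parsed yet when connectedCallback fires
    class MyWidget extends HTMLElement {
      connectedCallback() {
        console.log(this.innerHTML); // frequently "" here, not the light-DOM children
      }
    }
    customElements.define('my-widget', MyWidget);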
I remember seeing web components years ago; it sounds like they've improved a lot.
What do you do about the lack of (I assume) ecosystem? Due to React's ubiquity there's practically a library for everything. Do you find that using WC you are having to hand-roll a lot? I don't mean to be a package slave, but for complex and tedious things like graphs / charts.
Not sure what kind of clients you work with, but in my experience this is actually accurate, and you won't believe how many times I had to step in and fix SPAs that should have been a static website to begin with.
I think this is a consequence of a generation of webdevs, especially front-end, that graduated from bootcamps teaching them only JS frameworks and React as if that’s the only way the Web works.
They were given a hammer, told how to use it, and everything then just looks like a nail.
> When it is worth the pain to load a large bundle in exchange for having really small network requests after the load
...and yet, i keep running into web (and even mobile apps) that load the bundle, and subsequent navigation is just as slow, or _even slower_. Many banking websites, checking T-Mobile balance... you wait for the bundle to load on their super-slow website, ok, React, Angular, hundreds of megs, whatever. Click then to check the balance, just one number pulled in as tiny JSON, right? No, the website starts flashing another skeleton forever, why? You could say, no true SPA that is properly built would do that, but I run into this daily, many websites and apps made by companies with thousands of developers each.
The real reason SPAs are popular is because JavaScript is the new Visual Basic and there are millions of developers that know nothing else.
Workforce market forces like that have a vastly greater effect than “bandwidth optimisation”.
My evidence for this is simple: every SPA app I’ve ever seen is two orders of magnitude slower than ordinary HTML would have been. There is almost never a bandwidth benefit in practice. Theoretically, sure, but every app I come across just dumps half the database down the wire and picks out a few dozen bytes in JS code. There's a comment right here in this discussion advocating for this! [1]
Another angle is this: if you had a top-100 site with massive bandwidth costs, then sure, converting your app to an SPA might make financial sense. But instead what I see is tiny projects starting as SPAs from day one, with no chance that their bandwidth considerations, either cost or performance, will ever be a factor. Think internal apps accessed only over gigabit Ethernet.
I’ve seen static content presented as a SPA, which is just nuts to me.
Does a real-world example of bandwidth saving even exist for SPAs? It's always the other way around, where what could've been a single page load ends up being 6 separate asynchronous calls to different APIs fetching random bits and pieces while the user stares at spinners.
And the frontend-backend paradigm has seeped into the engineering culture; even the non-engineers on the team understand things in those terms. The main way we break work into tickets is by API endpoints versus client-side UI.
This. The mental model of an API with a frontend deployed as static resources just happens to be very attractive. Even more so when the SPA isn't the only frontend, or when you don't know that the SPA will remain the only frontend forever. When you have an SPA sitting on top of an API, introducing new clients for feature subsets (e.g. something running on a Garmin watch) becomes trivial.
If you have a huge org working on the project, you might actually succeed in sticking to that architecture even when serving plain old HTML, but smaller teams are likely to eventually write full-stack spaghetti (which might still be fine for some use cases!). Once there was a fashionable term, "progressive web app", with manifests and service workers optionally moving some backend stuff into the browser for offline-ish operation. These days I also see a parallel pattern: progressively moving a browser UI into an Electron-esque environment, where you can add features requiring more local access than the browser would allow.
> introducing new clients for feature subsets (e.g. something running on a Garmin watch) becomes trivial.
This never happens, for some values of never.
When a SPA app is initially developed, the "client" and the "API" are moulded to each other, like a bespoke piece of couture tailored to an individual's figure. Hand-in-glove. A puddle in a depression.
There is absolutely no way that some other category of platform can smoothly utilise such a specialised, single-purpose API without significant changes.
The reality is that most SPA apps are monoliths, even if the client app and the API app are in different Git repos in order to pretend that this is not the case.
>every SPA app I’ve ever seen is two orders of magnitude slower than ordinary HTML would have been.
I'd argue that you don't have an SPA, then. However, I don't see how you could have an application like Figma or Discord and claim that ordinary HTML would be faster (or even possible).
You mean a chat client? That seems a good worst-case scenario.
If you limit history to the most recent messages (and have a link to the archive at the top), you could simply reload the entire page on some interval that shrinks as message frequency rises (and whenever you submit the form).
Since the HTML document is pretty much empty, the reload happens so fast you won't see the flashing. With transitions it would be perfectly smooth.
With modern CSS you can display elements out of source order, so you can simply append the new line to the end of the HTML document that represents the channel (and to the archive). Purging a few old lines can be done less frequently.
I haven't tried it but it should work just fine. I will have to build it.
Initial load will be 100x faster. The page reloads will be larger but also insanely robust.
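To sketch what I mean (untested, as a tiny Node server; the /archive link and the fixed 5-second refresh are placeholders for the declining-interval idea):

```js
const http = require('http');

const messages = []; // newest last; purged lines would move to the archive

// Escape user text so appended lines can't inject markup.
const esc = s => s.replace(/[&<>]/g, c => ({ '&': '&amp;', '<': '&lt;', '>': '&gt;' }[c]));

http.createServer((req, res) => {
  if (req.method === 'POST') {
    let body = '';
    req.on('data', chunk => (body += chunk));
    req.on('end', () => {
      messages.push(new URLSearchParams(body).get('msg') || '');
      res.writeHead(303, { Location: '/' }); // submitting the form also reloads
      res.end();
    });
    return;
  }
  res.writeHead(200, { 'Content-Type': 'text/html' });
  // Nearly-empty document: the periodic full reload is cheap. A real version
  // would vary the refresh interval with message frequency instead of
  // hardcoding 5 seconds.
  res.end(`<!doctype html>
<meta http-equiv="refresh" content="5">
<a href="/archive">archive</a>
${messages.map(m => `<p>${esc(m)}</p>`).join('\n')}
<form method="post"><input name="msg" autofocus><button>Send</button></form>`);
}).listen(3000);
```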
The things I was wondering about were: 1) can a non-SPA chat client be as good as an SPA? 2) at what point is an SPA justified? (Is chat enough?)
Phone calls and live streams are things for which a tab needs to stay open. If you want to do other things simultaneously, both the browser and the OS could facilitate that, but they do so rather poorly.
For every one real "app" like Figma, there are hundreds of web pages with some forms and light interactivity. Numerically, there are far more enterprise LoB apps than there are true web applications that SPAs are well suited for.
"Every SPA app I've ever seen". I'm yet to see a fast one. Maybe they exist! I wouldn't know.
YouTube, for me, is unfathomably slow. It takes about a minute before I can get a specific song in one of my playlists playing. Every view change is 5-20 seconds, easily.
Facebook and the like now show grey placeholder skeletons for 10-30 characters of text, because hundreds of thousands of servers struggle to deliver half a line of text over terabits of fibre uplinks. Meanwhile, my 2400 baud modem in the 1990s filled the screen with text faster!
Jira was famously so slow that its sluggishness never failed to come up any time it was mentioned here. ServiceNow is better, but still too slow for my taste.
Etc...
If you disagree, link me to a fast SPA that you use on a regular basis.
PS: Just after writing this, I opened a Medium article, which used (I assume) SSR to show me the text of the article quickly, then replaced it with grey placeholders, and then, 6 full seconds later, re-rendered... the same static text with a client-side JavaScript SPA. 100 requests and 7 MB for 2 KB of plain text.
> SPAs make sense when your users have long sessions in your app. When it is worth the pain to load a large bundle in exchange for having really small network requests after the load.
Only for certain types of applications… the route-change time for many SPAs is way higher than for the equivalent MPA.
Java applets and ASP.NET did have a superficial answer to this, as did Flash, but they varied in their ability to actually function as raw web interfaces using the URL to navigate between sections.
Being able to reliably and programmatically interact with client-side storage and the URL, along with improvements in DOM APIs and the commoditization of hardware with more RAM and faster CPUs, among many other factors, seems to have contributed.
https://extjs.cachefly.net/ext-3.4.0/examples/ Unfortunately most of the data examples no longer work, but I believe everyone browsing the web back then remembers this blue-tinted interface.
Oh man, now THAT is a real blast from the past. Pretty classic "where did all the time go?"... "oh yeah, I forgot that I spent hundreds of hours battling with Sencha back in the day". Selective memory isn't always a bad thing, I guess.
The rule is either "uses HTML to render" or an even looser "renders in the browser". That doesn't seem like a deep constraint to me. You exclude a couple of very popular but historical plugins, where the browser set up a rectangle and handed it off to an external piece of code, and pretty much nothing else.
I mean, OK, whatever works... we put up curtains and close them when the sun hits the windows, but if smearing yogurt on your windows is more your vibe, I ain't judging.
I live in a two-floor home in southern Colorado. I use what I learned very young: hot air rises, cool air sinks.
So when inside and outside temperatures match during the rising morning temps, I seal up the downstairs and close all blinds except for the north-facing windows. As a habit it takes almost no time, and it helps keep us in the bottom 2% of energy consumers in the region.
How would the database know whether the other app layers depend on that value or not? You could absolutely have an app that does not require data in a specific field to function, yet all records happen to have data. This is actually fairly common in single-tenant apps, where some tenants populate a field and others do not. You need to look at how the data is used across the entire stack to know whether or not it should be nullable, not whatever the current data happens to be.
The script example in TFA is just a starting point. I believe you would still manually go through all the columns it finds and decide which ones are actually supposed to be nullable and which ones don't need to be. As the article said, nullable fields that don't contain null could be a sign of incomplete migrations and such.
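Something like this, I'd imagine (a hypothetical Node sketch against PostgreSQL, not the actual script from TFA; the 'public' schema filter and output format are assumptions):

```js
const { Client } = require('pg');

async function findSuspectColumns() {
  const client = new Client(); // connection settings from the standard PG* env vars
  await client.connect();

  // All columns declared nullable in the public schema.
  const { rows } = await client.query(`
    SELECT table_name, column_name
    FROM information_schema.columns
    WHERE table_schema = 'public' AND is_nullable = 'YES'
  `);

  for (const { table_name, column_name } of rows) {
    const res = await client.query(
      `SELECT COUNT(*) AS n FROM "${table_name}" WHERE "${column_name}" IS NULL`
    );
    // node-postgres returns bigint counts as strings.
    if (res.rows[0].n === '0') {
      // Only a candidate: a human still decides whether the column is truly
      // non-nullable or just happens to be fully populated today.
      console.log(`${table_name}.${column_name} is nullable but contains no NULLs`);
    }
  }
  await client.end();
}

findSuspectColumns().catch(console.error);
```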
It doesn't. That's why it's the responsibility of the application layer to correctly implement the data model of the database, and not the other way around.
I'd say the good from photography, aside from more options for creativity, is documentation. Journalism without photography would be of lower value. Photos are highly impactful in education, both formal and informal, to get visuals of the world beyond your immediate reach. Documentation of history, in particular local and family history, is far more powerful since photography came along.
I'd say the commercialization of it and the follow-on effects you mentioned are the bad, not the good.
I'm thinking of women's fashions in the U.S. — perhaps spurred on by depictions of the latest Parisian-wear from Godey's Lady's Book up to the 1890's. Then the starlets of a young Hollywood I suppose kicked off the flapper craze of the 1920's in the U.S.?
An illustration of a fashionable Parisian though was probably adequate — a photograph not required. Photography perhaps made the latest fashion trends ubiquitous?
That aside, I treasure photography for giving me a glimpse into the ordinary lives of my ordinary family going back three and four generations. It has captured the arc of an entire life: childhood, graduation from "Normal" school, marriage, motherhood… and finally the sadder photos where they are old, comforted now by their adult daughter, until the last photo in the series: their headstone.
I am thankful for all of that. I have found having the full span of a life captured in photographs to be sublime … sobering, grounding.
You should dig deeper into your market research. Analytics platforms do exist that provide this level of detail. That doesn't mean you shouldn't build another one - the fact that they already exist means the problem is validated. But it does make you look like someone just dipping their toes into a new idea, not an expert in this problem space. There is nothing wrong with that - everyone is new to their work/market at the beginning. But you don't want to be raising a big flag announcing it. You need to be differentiating yourself from what exists, not acting as if you are new. And say that you measure A/B tests; don't describe them as if you'd never heard the term.
But to answer the question of what we think about the product itself... I kinda despise client-side analytics that make calls out to 3rd parties (even if it is just a tracking pixel). I understand the value to marketers and PMs. But if you want something new in this problem space, send data to the app server and build server-side processing. Not only is that a cleaner client-side experience, it won't end up getting blocked by anti-spyware extensions.
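A minimal sketch of that first-party approach, assuming an Express app (the /analytics route and the event shape are made up for illustration):

```js
const express = require('express');
const app = express();

// navigator.sendBeacon posts with Content-Type text/plain, so accept raw text
// and parse it ourselves.
app.use('/analytics', express.text({ type: '*/*' }));

// Client side, same origin -- nothing for tracker blocklists to match:
//   navigator.sendBeacon('/analytics',
//     JSON.stringify({ event: 'page_view', path: location.pathname }));
app.post('/analytics', (req, res) => {
  let evt = {};
  try { evt = JSON.parse(req.body); } catch { /* ignore malformed beacons */ }
  // Real processing (sessionization, aggregation, storage) happens server-side;
  // logging stands in for it here.
  console.log(new Date().toISOString(), evt.event, evt.path);
  res.sendStatus(204);
});

app.listen(3000);
```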
Also, you are shooting yourself in the foot by measuring landing pages. They are already a fairly weak form of marketing - they are early idea validation, not even a real product. Optimizing at this point is borderline bike-shedding. You need these types of analytics when you have a product ready to roll and are optimizing conversions. Landing pages simply are not high-value enough to justify this much effort.
Thanks for the feedback and ideas. Yes, the current version may not be unique. I am trying to build a platform that will not only analyze metrics but also tell you how to improve a landing page or a product page using AI. This one is a super-small MVP I want to validate.
> send data to the app server and build server-side processing. Not only is that a cleaner client-side experience, it won't end up getting blocked by anti-spyware extensions
This is also a good suggestion. I will explore this option as well.
Yet searching just on "you use ruffle!" does not return these sites, and those sites do not actually have that content, which means it would have been quite difficult to notice this accidentally. Smells like an indexing problem, a marketing stunt, or both.
I can't find it anymore, but I remember way back when Rails first launched, DHH saying something along the lines of: "I don't know databases, so I found one way that worked and ran with it as an ORM." I'm sure the dude has learned more since then, so Rails has improved, but if you are asking if you should listen to DHH, you need to know that he is a pragmatist, not an idealist. That is not a bad thing... but if you are an idealist, it would explain why his work does not appeal to you.
I'm not doubting DHH's skills as a web developer; actually, the reason I've dedicated my time to listening to the podcast and writing this post is that I kind of see his point. However, during my 10-year career (which is nothing compared to DHH's, and I'm not saying that I'm right and he is wrong) I've learned what works and what doesn't (at least for me). Looking at Ruby on Rails, my feeling is that it shouldn't work, but as stated in the post description, I've heard different people saying that Ruby is great. So I'm wondering if they are just a small niche group or if it is as good as people say it is.
> "I don't know databases, so I found one way that worked and ran with it as an ORM"
This doesn't really play in favor of Ruby on Rails; the same applies to TypeScript and Java (and, I'm quite sure, to any language that is popular with web developers).
> It seems pretty useless if it’s just going to regurgitate...
...and now you have realized why so many of us do not buy into the hype. Because that is all it does - regurgitate. Hey, there are valid use cases where that functionality is awesome. But LLMs have functionality limits. We're still in the time frame where people are figuring out what those limits are.
Totally fair. LLMs absolutely have limits. But calling it just regurgitation misses what makes them powerful. With the right setup, you can surface contradictions, edge cases, and patterns across millions of perspectives instantly. Figuring out the limits is part of using them well.
I've been in scenarios where such UIs existed. But they were always protected so that only system admins had access, as a way to let them make quick queries in-app instead of having to pull up other tools. No additional access was granted; it was just a question of UX, and we expected that anything beyond a simple ad-hoc query would be done with real tools, not in the app.
Also, the underlying databases were secured. Just because you can send a query to a database does not mean you are exposing additional data - database-level security exists and works well.
If I had to greenlight such a UI, here's my list of non-negotiables (a sketch of the first two follows the list):
- Each human user has to use their own dedicated account.
- Every query leaves a trail that can't be tampered with.
- If the database contains sensitive data (personal info, payment data, ...), the database must provide a snapshot guarantee, so that we can determine whose personal or payment data was exposed by query X executed at instant T by a bad actor.
- The list of humans who can access the feature is vetted regularly.
- Any access that can modify data in the database requires at least two separate humans to agree on the query before it can run.
- Any query that could hamper application throughput is either forbidden, runs on a replica database, or requires at least two separate humans to agree before it can run.
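As a concrete illustration of the first two items, a hypothetical sketch (function and table names are made up; a production audit trail would live in an append-only store the caller cannot write to):

```js
const { Client } = require('pg');

async function runAdminQuery(userDbCredentials, sql) {
  // Non-negotiable 1: connect as the individual human's own database account,
  // never a shared service user, so DB-level permissions and logs are personal.
  const client = new Client(userDbCredentials);
  await client.connect();
  try {
    // Non-negotiable 2: record the query *before* executing it, so even a
    // failed or interrupted query leaves a trail.
    await client.query(
      'INSERT INTO audit_log (db_user, query, at) VALUES (current_user, $1, now())',
      [sql]
    );
    return await client.query(sql);
  } finally {
    await client.end();
  }
}
```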
Back in the first dotcom era I worked at a place that had a "SQL page" in the website. Just a textarea where you could enter any query and run it. It was wide open, protected only by the fact that it wasn't linked anywhere (there was no way to get to it other than entering the URL directly into the browser). It was there for the reasons you list, a quick way to verify that the database connections were working and to run ad-hoc queries for support/troubleshooting.
It was thought to be safe enough, because "nobody could guess" the URL of that page.
Their charts measure 50 weeks, not forever. So an 80% yearly churn is not exactly a good statistic. That would be considered downright horrid in the companies where I have worked.