
If you're behind an overloaded geosynchronous satellite then no JS at all just moves the pain around. At least once it's loaded a JS-heavy app will respond to most mouse clicks and scrolls quickly. If there's no JS then every single click will go back to the server and reload the entire page, even if all that's needed is to open a small popup or reload a single word of text.



This makes perfect sense in theory and yet it's the opposite of my experience in practice. I don't know how, but SPA websites are pretty much always much more laggy than just plain HTML, even if there are a lot of page loads.


It often is that way, but it's not for technical reasons. They're just poorly written. A lot of apps are written by inexperienced teams under time pressure and that's what you're seeing. Such teams are unlikely to choose plain server-side rendering because it's not the trendy thing to do. But SPAs absolutely can be done well. For simple apps (HN is a good example) you won't get too much benefit, but for more highly interactive apps it's a much better experience than going via the server every time (setting filters on a shopping website would be a good example).


Yep. In SPAs with good architecture, you only need to load the page once, which is obviously weighed down by the libraries, but largely is as heavy or light as you make it. Everything else should be super minimal API calls. It's especially useful in data-focused apps that require a lot of small interactions. Imagine implementing something like spreadsheet functionality using forms and requests and no JavaScript, as others are suggesting all sites should work: productivity would be terrible not only because you'd need to reload the page for trivial actions that should trade a bit of JSON back and forth, but also because users would throw their devices out the window before they got any work done. You can also queue and batch changes in a situation like that, so the requests are not only comparatively tiny, you can use fewer of them. That said, most sites definitely should not be SPAs. Use the right tool for the job.


> which is obviously weighed down by the libraries, but largely is as heavy or light as you make it

One thing which surprised me at a recent job was that even what I consider to be a large bundle size (2MB) didn't have much of an effect on page load time. I was going to look into bundle splitting (because that included things like a charting library that was only used in a small subsection of the app). But in the end I didn't bother because I got page loads fast (~600ms) without it.
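(For what it's worth, if the bundle ever did become a problem, a dynamic import() is usually all the splitting takes. A minimal sketch, with './charting.js' standing in for whatever would wrap the charting library:)

    // Hypothetical: only the charts page pays for the charting library.
    // Bundlers like webpack/Vite/esbuild split this into its own chunk,
    // fetched the first time the function runs.
    async function showChartsPage(container, data) {
      const { renderChart } = await import('./charting.js');
      renderChart(container, data);
    }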

What did make a huge difference was cutting down the number of HTTP requests that the app made on load (and making sure that they weren't serialised). Our app was originally doing auth by communicating with Firebase Auth directly from the client, and that was terrible for performance because that request was quite slow (most of a second!) and blocked everything else. I created an all-in-one auth endpoint that would check the user's auth and send back initial user and app configuration data in one ~50ms request, and suddenly the app was fast.
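Roughly the shape of that change (the endpoint name and response fields here are made up, not the real API):

    // Before (sketch): the client talked to Firebase Auth directly (slow,
    // most of a second) and only then fetched user/config data.
    //
    // After (sketch): one endpoint validates the token server-side and returns
    // everything the app needs to boot, in a single ~50ms round trip.
    async function bootstrap(token) {
      const res = await fetch('/api/bootstrap', {
        headers: { Authorization: `Bearer ${token}` },
      });
      if (!res.ok) throw new Error('auth failed');
      return res.json(); // { user, config, ... }
    }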


In many cases, like satellite Internet access or spotty mobile service, for sure. But if you have low bandwidth and fast response times, that 2 MB is murder and the big pile o' requests is NBD. If you have slow response times but good throughput, the 2 MB is NBD but the requests are murder.

An extreme and outdated example, but back when cable modems first became available, online FPS players were astonished to see how much better the ping times were for many dial up players. If you were downloading a floppy disk of information, the cable modem user would obviously blow them away, but their round trip time sucked!

Like if you're on a totally reliable but low throughput LTE connection, the requests are NBD but the download is terrible. If you're on spotty 5g service, it's probably the opposite. If you're on, like, a heavily deprioritized MVNO with a slower device, they both super suck.

It's not like optimization is free though, which is why it's important to have a solid UX research phase to get data on who is going to use it, and what their use case is.


My experience agrees with this comment – I’m not sure why web browsers seem to frequently get hung up on only some HTTP requests at times, unrelated to the actual network conditions. I.e. in the browser the HTTP request is timing out or sitting in a blocked state and hasn’t even reached the network layer when this occurs. (Not sure if I should be pointing the finger here at the browser or the underlying OS.) When testing slow / stalled loading issues, the browser itself is frequently one of the culprits. Still, the issue I’m referring to only further reinforces the article and the sentiment in this HN thread: cut down on the number of requests and the bloat, and this issue too can be avoided.


If the request itself hasn't reached the network layer but is having a networky-feeling hang, I'd look into DNS. It's network dependent but handled by the system, so it wouldn't show up in your web app requests. I'm sure there's a way to profile this directly, but unless I had to do it all the time I'd probably just fire up Wireshark.
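(If you do want to check without Wireshark, the Resource Timing API breaks each request down into queueing / DNS / connect time. A quick sketch to paste into the devtools console; note that cross-origin entries report zeros unless the server sends Timing-Allow-Origin:)

    // Per-request breakdown from the browser's own timing data (ms).
    for (const e of performance.getEntriesByType('resource')) {
      console.log(e.name, {
        queued:  e.requestStart - e.startTime,            // blocked/queued before sending
        dns:     e.domainLookupEnd - e.domainLookupStart,
        connect: e.connectEnd - e.connectStart,
        ttfb:    e.responseStart - e.requestStart,
      });
    }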


Chrome has a built-in, hard-coded limit of six (6) concurrent connections per host (over HTTP/1.1). Once you have that many in flight, any subsequent requests will be kept in queue.

Now take a good, hard look at the number of individual resources your application's page includes. Every tracker, analytics crapware, etc. gets in that queue. So do all the requests they generate. And the software you wrote is even slower to load because marketing insisted that they must have their packages loading at the top of the page.

Welcome to hell.


Can you point me to a decently complex front end app, written by a small team, that is well written? I’ve seen one, Linear, but I’m interested to see more


An SPA I use not infrequently is the online catalog on https://segor.de (it's a small store for electronics components). When you open it, it downloads the entire catalog, some tens of MB of I-guess-it-predates-JSON, and then all navigation and filtering is local and very fast.


Having written a fair amount of SPA and similar code, I can confirm that it is actually possible to just write some JavaScript that does fairly complicated jobs without the whole thing ballooning into the MB space. I'd say I could write a fairly feature-rich chat app in, say, 500 kB of JS; minified and compressed, that would be more like 50 kB on the wire.

How my "colleagues" manage to get to 20 MB is a bit of a mystery.


> How my "colleagues" manage to get to 20 MB is a bit of a mystery.

More often than not (and wittingly or not) it is effectively by using javascript to build a browser-inside-the-browser, Russian doll style, for the purposes of tracking users' behavior and undermining privacy.

Modern "javascript frameworks" do this all by default with just a few clicks.


There's quite some space between "100% no JS" and "full SPA"; many applications are mostly backend template-driven, but use JS async loads for some things where it makes sense. The vote buttons on Hacker News are a good example.

I agree a lot of full SPAs are poorly done, but some do work well. FastMail is an example of a SPA done well.

The reason many SPAs are slower is just latency; traditional template-driven is:

- You send request

- Backend takes time to process that

- Sends all the data in one go.

- Browser renders it.

But full SPA is:

- You send request

- You get a stub template which loads JS.

- You load the JS.

- You parse the JS.

- You send some number of requests to get some JSON data, this can be anything from 1 to 10 depending on how it was written. Sometimes it's even serial (e.g. request 1 needs to complete, then uses some part of that to send request 2, and then that needs to finish).

- Your JS parses that and converts it to HTML.

- It injects that in your DOM.

- Browser renders it.

There are ways to make that faster, but many don't.
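The serial-requests case in particular is often just a missing Promise.all. Illustrative endpoint names, but the pattern (inside an async function) looks like this:

    // Slow: three round trips, strictly one after another.
    const user  = await fetch('/api/user').then(r => r.json());
    const prefs = await fetch('/api/prefs').then(r => r.json());
    const posts = await fetch(`/api/posts?user=${user.id}`).then(r => r.json());

    // Faster: only the genuinely dependent request waits; the rest go out together.
    const [user2, prefs2] = await Promise.all([
      fetch('/api/user').then(r => r.json()),
      fetch('/api/prefs').then(r => r.json()),
    ]);
    const posts2 = await fetch(`/api/posts?user=${user2.id}`).then(r => r.json());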


However, getting 6.4KB of data (just tested on my blog) or 60KB of data (a git.sr.ht repository with a README.md and a PNG) is way better than getting 20MB of frameworks in the first place.


Yes. It's inexcusable that text and images and video pulls in megabytes of dependencies from dozens of domains. It's wasteful on every front: network, battery, and it's also SLOW.


The crappy part is that even themes for static site generators like mkdocs link resources from Cloudflare rather than including them in the theme.

For typedload I've had to use wget+sed to get rid of that crap after recompiling the website.

https://codeberg.org/ltworf/typedload/src/branch/master/Make...


Yeah, but your blog is not a full featured chat system with integrated audio and video calling, strapped on top of a document format.

There are a few architectural/policy problems in web browsers that cause this kind of expansion:

1. Browsers can update large binaries asynchronously (= instant from the user's perspective), but this feature has only very recently become available to web apps via obscure caching headers, and most people don't know it exists yet / frameworks don't use it (see the header sketch after this list).

2. Large download sizes tend to come from frameworks that are featureful and thus widely used. Browsers could allow them to be cached but don't because they're over-aggressive at shutting down theoretical privacy problems, i.e. the browser is afraid that if one site learns you used another site that uses React, that's a privacy leak. A reasonable solution would be to let HTTP responses opt in to being put in the global cache rather than a partitioned cache, that way sites could share frameworks and they'd stay hot in the cache and not have to be downloaded. But browsers compete to satisfy a very noisy minority of people obsessed with "privacy" in the abstract, and don't want to do anything that could kick up a fuss. So every site gets a partitioned cache and things are slow.

3. Browsers often ignore trends in web development. React-style vdom diffing could be offered by browsers themselves, where it'd be faster and shipped with browser updates, but it isn't, so lots of websites ship it themselves over and over. I think the Sciter embedded browser actually does do this. CSS is a very inefficient way to represent styling logic, which is why web devs write dialects like sass that are more compact, but browsers don't adopt it.
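(On point 1, I'm guessing at which headers are meant, but Cache-Control's stale-while-revalidate is the usual way to get that "serve instantly, update in the background" behaviour for a web app's own assets. A minimal Node sketch:)

    // Hashed bundles can be cached "forever"; the HTML shell is served from
    // cache immediately while a background revalidation fetches any update.
    const http = require('http');
    http.createServer((req, res) => {
      if (req.url.startsWith('/assets/')) {
        // content-hashed filenames never change, so cache for a year
        res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
      } else {
        res.setHeader('Cache-Control', 'max-age=60, stale-while-revalidate=86400');
      }
      res.end('...');
    }).listen(8080);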

I think at some pretty foundational level the way this stuff works architecturally is wrong. The web needs a much more modular approach and most JS libraries should be handled more like libraries are in desktop apps. The browser is basically an OS already anyway.


> CSS is a very inefficient way to represent styling logic which is why web devs write dialects like sass that are more compact, but browsers don't adopt it.

I don't know exactly which features you are referring to, but you may have noticed that CSS has adopted native nesting, very similarly to Sass, but few sites actually use it. Functions and mixins are similar compactness/convenience topics being worked on by the CSSWG.

(Disclosure: I work on style in a browser team)


I hadn't noticed and I guess this is part of the problem. Sorry this post turned into a bit of a rant but I wrote it now.

When it was decided that HTML shouldn't be versioned anymore it became impossible for anyone who isn't a full time and very conscientious web dev to keep up. Versions are a signal, they say "pay attention please, here is a nice blog post telling you the most important things you need to know". If once a year there was a new version of HTML I could take the time to spend thirty minutes reading what's new and feel like I'm at least aware of what I should learn next. But I'm not a full time web dev, the web platform changes constantly, sometimes changes appear and then get rolled back, and everyone has long since plastered over the core with transpilers and other layers anyway. Additionally there doesn't seem to be any concept of deprecating stuff, so it all just piles up like a mound of high school homework that never shrinks.

It's one of the reasons I've come to really dislike CSS and HTML in general (no offense to your work, it's not the browser implementations that are painful). Every time I try to work out how to get a particular effect it turns out that there's now five different alternatives, and because HTML isn't versioned and web pages / search results aren't strongly dated, it can be tough to even figure out what the modern way to do it is at all. Dev tools just make you even more confused because you start typing what you think you remember and now discover there are a dozen properties with very similar names, none of which seem to have any effect. Mistakes don't yield errors, it just silently does either nothing or the wrong thing. Everything turns into trial-and-error, plus fixing mobile always seems to break desktop or vice-versa for reasons that are hard to understand.

Oh and then there's magic like Tailwind. Gah.

I've been writing HTML since before CSS existed, but feel like CSS has become basically non-discoverable by this point. It's understandable why neither Jetpack Compose nor SwiftUI decided to adopt it, even whilst being heavily inspired by React. The CSS dialect in JavaFX I find much easier to understand than web CSS, partly because it's smaller and partly because it doesn't try to handle layout. The way it interacts with components is also more logical.


You may be interested in the Baseline initiative, then. (https://web.dev/baseline/2024)


That does look useful, thanks!


> Oh and then there's magic like Tailwind. Gah.

I'm not sure why Tailwind is magic. It's just a bunch of predefined classes at its core.


The privacy issues aren't just hypothetical, but that aside, that caching model unfortunately doesn't mesh well with modern webdev. It requires all dependencies to be shipped in full (no tree shaking to only include the needed functions), shipped separately as individual files, and for people to actually stick to the same versions of dependencies.


Can you show some real sites that were mounting such attacks using libraries?


Also, I wonder how much saving is still possible with a more efficient HTML/CSS/JS binary representation. Text is low tech and all, but it still hurts to waste so many octets for such a relatively low number of possible symbols.

Applies to all formal languages actually. 2^(8x20x10^6) ~= 2x10^48164799 is such a ridiculously large space...


The generalisation of this concept is what I like to call the "kilobyte" rule.

A typical web page of text on a screen is about a kilobyte. Sure, you can pack more in with fine print, and obviously additional data is required to represent the styling, but the actual text is about 1 kb.

If you've sent 20 MB, then that is 20,000x more data than what was displayed on the screen.

Worse still, an uncompressed 4K still image is only 23.7 megabytes. At some point you might be better off doing "server side rendering" with a GPU instead of sending more JavaScript!
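Spelled out:

    // ~1 kB of visible text per screenful vs. what actually gets shipped:
    console.log((20 * 1024 * 1024) / 1024);   // 20 MB / 1 kB ≈ 20,480x overhead

    // An uncompressed 4K frame at 3 bytes per pixel:
    console.log(3840 * 2160 * 3 / 2 ** 20);   // ≈ 23.73 MB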


> "server side rendering" with a GPU instead of sending more JavaScript

Some 7~10 years ago I remember seeing somewhere (maybe here on HN) a website which did exactly this: you gave it a URL, it downloaded the webpage with all its resources, rendered and screenshotted it (probably in headless Chrome or something), and compared the size of the PNG screenshot versus the size of the webpage with all its resources.

For many popular websites, the PNG screenshot of a page was indeed several times smaller than the webpage itself!


I read epubs, and they’re mostly HTML and CSS files, zipped. The whole book usually comes in under a MB if there aren’t a lot of big pictures. Then you come across a website and for just an article you have to download tens of MBs. Disable JavaScript and the website is broken.


If your server renders the image as text we'll be right back down towards a kilobyte again. See https://www.brow.sh/


Soo.. there should be a standardized web API for page content. And suddenly... gopher (with embedded media/widgets).


Shouldn’t HTTP compression reap most of the benefits of this approach for bigger pages?



Surely you're aware of gzip encoding on the wire for HTTP, right?


Sure, would be interesting to know how it would fare against purpose-made compression under real world conditions still...


gzip is fast. And it was made for real world conditions.


Brotli is a better example, as it was literally purpose made for HTML/CSS/JS. It is now supported basically everywhere for HTTP compression, and uses a huge custom dictionary (about 120KB) that was trained on simulated web traffic.

You can even swap in your own shared dictionary with a HTTP header, but trying to make your own dictionary specific to your content is a fool’s errand, you’ll never amortize the cost in total bits.

What you CAN do with shared dictionaries though, is delta updates.

https://developer.chrome.com/blog/shared-dictionary-compress... https://news.ycombinator.com/item?id=39615198
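If you want to see what brotli buys on your own content, Node ships both codecs. A quick sketch (the ratios obviously depend on the input):

    const fs = require('fs');
    const zlib = require('zlib');

    const buf = fs.readFileSync('index.html');
    console.log('raw   ', buf.length);
    console.log('gzip  ', zlib.gzipSync(buf, { level: 9 }).length);
    console.log('brotli', zlib.brotliCompressSync(buf).length);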


Is it supported by curl or any browser or no?


Every browser, curl yes.


False dichotomy, with what is likely extreme hyperbole on the JS side. Are there actual sites that ship 20 MB, or even 5 MB or more, of frameworks? One can fit a lot of useful functionality in 100 KB or less of JS, especially minified and gzipped.


Well, I'm working right now so let me check our daily "productivity" sites (with an adblocker installed):

  - Google Mail: Inbox is ~18MB (~6MB Compressed). Of that, 2.5MB is CSS (!) and the rest is mostly JS
  - Google Calendar: 30% lower, but more or less the same proportions
  - Confluence: Home is ~32MB (~5MB Comp.). There's easily 20MB of Javascript and at least 5MB of JSON. 
  - Jira: Home is ~35MB (~7MB compressed). I see more than 25MB of Javascript
  - Google Cloud Console: 30MB (~7MB Comp.). I see at least 16MB of JS
  - AWS Console: 18MB (~4MB Comp.). I think it's at least 12MB of JS
  - New Relic: 14MB (~3MB Comp.). 11MB of JS.
    This is funny because even being way more data heavy than the rest, its weight is way lower.
  - Wiz: 23MB (~6MB Comp.) 10MB of JS and 10MB of CSS
  - Slack: 60MB (~13MB Compressed). Of that, 48MB of JS. No joke.


I sometimes wish I could spare the time just to tear into something like that Slack number and figure out what it is all doing in there.

Javascript should even generally be fairly efficient in terms of bytes/capability. Run a basic minimizer on it and compress it and you should be looking at something approaching optimal for what is being done. For instance, a variable reference can amortize down to less than one byte, unlike compiled code where it ends up 8 bytes (64 bits) at the drop of a hat. Imagine how much assembler "a.b=c.d(e)" can compile to, when it is likely represented in less compressed space than a single 64-bit integer in a compiled language.

Yet it still seems like we need 3 megabytes of minified, compressed Javascript on the modern web just to clear our throats. It's kind of bizarre, really.


js developers had this idea of "1 function = 1 library" for a really long time, and "NEVER REIMPLEMENT ANYTHING". So they will go and import a library instead of writing a 5 line function, because that's somehow more maintainable in their mind.

Then of course every library is allowed to pin its own dependencies. So you can have 15 different versions of the same thing, so they can change API at will.

I poked around some electron applications.

I've found .h files from openssl, executables for other operating systems, megabytes of large image files that were for some example webpage, in the documentation of one project. They literally have no idea what's in there at all.


That's a good question. I just launched Slack and took a look. Basically: it's doing everything. There's no specialization whatsoever. It's like a desktop app you redownload on every "boot".

You talk about minification. The JS isn't minified much. Variable names are single letter, but property names and more aren't renamed, formatting isn't removed. I guess the minifier can't touch property names because it doesn't know what might get turned into JSON or not.
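The classic failure mode, roughly (and the reason property mangling is opt-in and scary in tools like terser):

    // Local variable names are private to the bundle, but property names can
    // become part of the wire format.
    const payload = { channelId: 'C123', messageText: 'hi' };
    const body = JSON.stringify(payload); // '{"channelId":"C123","messageText":"hi"}'
    // If a mangler renamed channelId -> a and messageText -> b, the server would
    // receive '{"a":"C123","b":"hi"}' and the API contract silently breaks.
    // Renaming `payload` itself is always safe, which is why only variables shrink.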

There's plenty of logging and span tracing strings as well. Lots of code like this:

            _n.meta = {
                name: "createThunk",
                key: "createThunkaddEphemeralMessageSideEffectHandler",
                description: "addEphemeralMessageSideEffect side effect handler"
            };
The JS is completely generic. In many places there are if statements that branch on all languages Slack was translated into. I see checks in there for whether localStorage exists, even though the browser told the server what version it is when the page was loaded. There are many checks and branches for experiments, whether the company is in trial mode, whether the code is executing in Electron, whether this is GovSlack. These combinations could have been compiled server side to a more minimal set of modules but perhaps it's too hard to do that with their JS setup.

Everything appears compiled using a coroutines framework, which adds some bloat. Not sure why they aren't using native async/await but maybe it's related to not being specialized based on execution environment.

Shooting from the hip, the learnings I'd take from this are:

1. There's a ton of low hanging fruit. A language toolchain that was more static and had more insight into what was being done where could minify much more aggressively.

2. Frameworks that could compile and optimize with way more server-side constants would strip away a lot of stuff.

3. Encoding logs/span labels as message numbers+interpolated strings would help a lot. Of course the code has to be debuggable but hopefully, not on every single user's computer.

4. Demand loading of features could surely be more aggressive.

But Slack is very popular and successful without all that, so they're probably right not to over-focus on this stuff. Especially for corporate users on corporate networks, does anyone really care? Their competition is Teams, after all.


This is mind blowing to me. I expect that the majority of any application will be the assets and content. And megabytes of CSS is something I can't imagine. Not the least for what it implies about the DOM structure of the site. Just, what!? Wow.


Holy crap, that's too much. And this is with adblock, which is the better-case scenario.


I just tried some websites:

    - https://web.whatsapp.com 11.12MB compressed / 26.17MB real.
    - https://www.arstechnica.com 8.82MB compressed / 16.92MB real.
    - https://www.reddit.com 2.33MB compressed / 5.22 MB real.
    - https://www.trello.com (logged in) 2.50MB compressed / 10.40MB real.
    - https://www.notion.so (logged out) 5.20MB compressed / 11.65MB real.
    - https://www.notion.so (logged in) 19.21MB compressed / 34.97MB real.


Well, in TFA, if you re-read the section labeled "Detailed, Real-world Example", yes, that is exactly what the person was encountering. So no hyperbole at all, actually.


I agree with adding very little JavaScript, say the 1 kB https://instant.page/, to make pages snappier.
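(For anyone wondering what a script that small can even be doing: the core trick is prefetching on hover. Something like this hand-rolled approximation, not instant.page's actual code:)

    // Start fetching a same-origin page the moment the user hovers its link;
    // by the time the click lands (~100-300ms later) it's often already cached.
    document.addEventListener('mouseover', (e) => {
      const a = e.target.closest && e.target.closest('a[href]');
      if (!a || a.origin !== location.origin || a.dataset.prefetched) return;
      const link = document.createElement('link');
      link.rel = 'prefetch';
      link.href = a.href;
      document.head.append(link);
      a.dataset.prefetched = 'true';
    });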


I'm getting almost 2MB (5MB uncompressed) just for a google search.


Getting at least “n” kB of HTML with content in it that you can look at in the interim is better than getting the same amount of framework code.

SPAs also have a terrible habit of not behaving well after being left alone for a while. Nothing like coming back to a blank page and having it try to redownload the world to show you 3 kB of text, because we stopped running the VM a week ago.


Here’s a browser behaviour that stops being true with JS navigation: in JS, the first link you click is the one that navigates you. In the browser, clicking a second link cancels the first one and navigates to the second one.

GitHub annoys the fuck out of me with this.


Yeah, right. GitHub migrated from serving static sites to displaying everything dynamically and it’s basically unusable nowadays. Unbelievably long load times, frustratingly unresponsive, and that’s on my top spec m1 MacBook Pro connected to a router with fiber connection.

Let’s not kid ourselves: no matter how many fancy features, splitting, optimizing, whatever you do, JS webapps may be an upgrade for developers, but they’re a huge downgrade for users in all respects.


Every time I click a link in GitHub, and watch their _stupid_ SPA “my internal loading bar is better than yours” I despair.

It’s never faster than simply reloading the page. I don’t know what they were thinking, but they shouldn’t have.


I have an instance of Forgejo and it’s so snappy. Granted, I’m the only user, but the server is only 2 GB / 2 vcores, with other services present.

On the other hand, GitLab doesn’t work with JS disabled.


Between massacring the UX and copilot I've more or less stopped engaging with github. I got tempted the other day to comment on an issue and it turns out the brain trust over at Microsoft broke threaded comment replies. They still haven't fixed keyboard navigation in their bullshit text widget.

I could put up with the glacial performance if it actually worked in the first place, but apparently adding whiz bang "AI" features is the only thing that matters these days.

The whole thing smacks of a rewrite so someone could get a bonus and/or promotion.


Surprisingly, my experience of GitLab is even worse! How’s yours? BitBucket wasn’t much better either, from memory. Seems like most commercial offerings in this space suck.


I've been using Sourcehut. I respect Drew's commitment to open source, but I think that a lot of the UX misses the mark. For most things I really don't want an email based work flow and some pieces feel a bit disjointed. Overall though it has most of the features I want, and dramatically less bullshit than Github.


Loading an entire page with cached pictures is more or less instant, connection-wise, though.



