
A lot of this resonates. I'm not in Antarctica, I'm in Beijing, but still struggle with the internet. Being behind the Great Firewall means using creative approaches. VPNs only sometimes work, and each leaves a signature that the firewall's heuristics and ML can eventually catch on to. Even state-mandated ones are 'gently' limited at times of political sensitivity. It all ends up meaning that, even if I get a connection, it's not stable, and it's so painful to sink precious packets into pointless web-app-react-crap roundtrips.

I feel like some devs need to time-travel back to 2005 or something and develop for that era in order to learn how to build things nimbly. In the absence of time travel, if people could just learn to open dev tools and use the throttling feature: turn it to 3G, and see if their web app is resilient. Please!
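The same check can even be scripted so it runs in CI instead of relying on someone remembering to open dev tools. A minimal sketch, assuming Puppeteer is installed and the file is run as an ES module (the URL is a placeholder):

    // throttle-check.mjs -- measure full page load under an emulated Slow 3G link
    import puppeteer, { PredefinedNetworkConditions } from 'puppeteer';

    const url = process.argv[2] ?? 'https://example.com'; // placeholder target

    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    // Apply the throttling profile before navigating.
    await page.emulateNetworkConditions(PredefinedNetworkConditions['Slow 3G']);

    const start = Date.now();
    await page.goto(url, { waitUntil: 'load', timeout: 120_000 });
    console.log(`Full load over Slow 3G: ${(Date.now() - start) / 1000}s`);

    await browser.close();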



"I feel like some devs need to time-travel back to 2005 or something and develop for that era in order to learn how to build things nimbly."

No need to invent time travel, just let them have a working retreat somewhere with only a bad mobile connection for a few days.


Amen to this. And give them a mobile cell plan with 1GB of data per month.

I've seen some web sites with 250MB payloads on the home page due to ads and pre-loading videos.

I work with parolees who get free government cell phones and then burn through the 3GB/mo of data within three days. Then they can't apply for jobs, get bus times, rent a bike, top up their subway card, get directions.


"But all the cheap front-end talent is in thick client frameworks, telemetry indicates most revenue conversions are from users on 5G, our MVP works for 80% of our target user base, and all we need to do is make back our VC's investment plus enough to cash out on our IPO exit strategy, plus other reasons not to care" — self-identified serial entrepreneur, probably


Having an adblocker (Firefox mobile works with uBlock Origin) and completely deactivating the loading of images and videos can get you quite far on a limited connection.


You're 100% right. uBlock Origin can reduce page weight by an astronomical amount.


uMatrix (unsupported but still works) reduces page weight and compute even more


If you enable uBlock Origin's advanced user mode you get access to just about all the things uMatrix does


Except for a usable UI.


How so?


It allows JavaScript on the original site domain, but turns it off for external domains, then lets you selectively turn it back on, and remembers what you've selected. Turning off JavaScript cuts out a lot of both additional downloading and CPU cycles. For impatient people it's tedious, but for minimalists it's heaven.


I strongly suspect that the Venn diagram of people who know how to minimize data usage and indigent people being given free cell phones with limited data has precious little overlap.


Yes, but "qingcharles" said he works with them, so he can show it to them.


And disabling JavaScript


Yeah and then give them thousands upon thousands of paying customers with these constraints worth caring about


This is probably a deliberate decision on the part of the government. A lot of the justice system is designed to keep people in prison.


That makes little sense. If it was a deliberate plan, it would be much more effective (and cheaper) to not provide the cellphones and data plan in the first place.


Exactly. It's easy to attribute to malice what is really incompetence. The root problem is that there is no feedback loop: the government agency funding the program isn't properly looking at the product being delivered at the other end, and isn't putting measures in place to stop the contractors from taking advantage of the program by delivering the barest minimum without tracking the effects on the end customer.


You see this all the time. Need to keep up appearances, but the actual implementation is poor.


It's not deliberate by the government. And the situation is getting better. Basically it's some of the telcos being a bit greedy: only giving users 3GB/mo of data and spending all their profits on paying people in the hoods $20 a pop to sign people up.

Recently I've started to see some contracts with vastly more data, including some unlimiteds.


Just put them on a train during work hours! We have really good coverage here but there's congestion and frequent random dropouts, and a lot of apps just don't plan for that at all.


There's no need for a retreat. Chrome DevTools has a "simulate slow connection" option.


Yeah, and do they use it? Does it let them experience the joy of wanting just a little text information, but having to load loads of other stuff first while the connection times out? I'm afraid that to get the full experience, they actually need to have a bad connection.


My first education was in industrial design, where designers usually take pride in avoiding needless complexity and material waste.

Even when I build web services I try to do so with as few moving parts as needed and heavy reliance on trusted and standardized solutions. If you see any addition of complexity as a cost that needs to be weighed against the benefits you hope it brings, you will automatically end up with a lean and fast application.


A little time with embedded hardware will teach you a few things too.


I lived in Shoreditch for 7 years and most of my flats had almost 3G internet speeds. The last one had windows that incidentally acted like a Faraday cage.

I always test my projects with throttled bandwidth, largely because (just like with a11y) following good practices results in better UX for all users, not just those with poor connectivity.

Edit: Another often missed opportunity is building SPAs as offline-first.


>> Another often missed opportunity is building SPAs as offline-first.

You are going to get so many blank stares at many shops building web apps when suggesting things like this. This kind of consideration doesn't even enter into the minds of many developers in 2024. Few of the available resources in 2024 address it that well for developers coming up in the industry.

Back in the early-2000s, I recall these kinds of things being an active discussion point even with work placement students. Now that focus seems to have shifted to developer experience with less consideration on the user. Should developer experience ever weigh higher than user experience?


>Should developer experience ever weigh higher than user experience?

Developer experience is user experience. However, in a normative sense, I operate such that Developer suffering is preferable to user suffering to get any arbitrary task done.


The irony for me is that I got into React because I thought that we could finally move to an offline-first SPA application. Current trends seem to go the opposite.


SPAs and "engineering for slow internet" usually don't belong together. The giant bundles usually guarantee slow first paint, and the incremental rendering/loading usually guarantees a lot of network chatter that randomly breaks the page when one of the requests times out. Most web applications are fundamentally online. For these, consider what inspires more confidence when you're in a train on a hotspot: an old school HTML forms page (like HN), or a page with a lot of React grey placeholders and loading spinners scattered throughout? I guess my point is that while you can take a lot of careful time and work to make an SPA work offline-first, as a pattern it tends to encourage the bloat and flakiness that makes things bad on slow internet.


London internet (and English internet in general) is just so bad.

Having lived in lots of countries (mainly developing) it’s embarrassing how bad our internet is in comparison


Oh, London is notorious for having... questionable internet speeds in certain areas. It's good if you live in a new-build flat, work in a recently constructed office building, or own your own home in a place Openreach has already gotten to, but if you live in an apartment building/work in an office building more than 5 or so years old?

Yeah, there's a decent chance you'll be stuck with crappy internet as a result. I still remember quite a few of my employers getting frustrated that fibre internet wasn't available for the building they were renting office space in, despite them running a tech company that really needed a good internet connection.


We design for slow internet; React is one of the better options for it with SSR, code splitting and HTTP/2 push, mixed in with more offline-friendly clients like Tauri. You can also deploy very near people if you work “on the edge”.

I’m not necessarily disagreeing with your overall point, but modern JS is actually rather good at dealing with slow internet for server-client “applications”. It’s not necessarily easy to do, and there are almost no online resources that you can base your projects on if you’re a Google/GPT programmer. Part of this is because of the ocean of terrible JS resources online, but a big part of it is also that the organisations which work like this aren’t sharing. We have zero public resources for the way we work, as an example, because why would we hand that info to our competition?



Very sad. I use http/2 push on my website to push the CSS if there’s no same-origin referrer. It saves a full roundtrip which can be pretty significant on high latency connections. The html+css is less than 14kb so it can all be sent on the first roundtrip as it’s generally within TCP’s initial congestion window of about 10*1400.

The only other alternative is to send the CSS inline, but that doesn’t work as well for caching for future page loads.

103 Early Hints is not nearly as useful, as it doesn’t actually save a round trip. It only works around long request processing time on the server. Also, most web frameworks will have a very hard time supporting early hints, because it doesn’t fit in the normal request->response cycle, so I doubt it’s going to get much adoption.

Also it would be nice to be able to somehow push 30x redirects to avoid more round trips.
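For anyone curious what the server side of that pattern looks like: a minimal sketch with Node’s built-in http2 module (the file paths and the referrer check are illustrative, not my actual setup), pushing the stylesheet only when there’s no same-origin referrer:

    import http2 from 'node:http2';
    import fs from 'node:fs';

    const css = fs.readFileSync('./style.css');   // placeholder assets
    const html = fs.readFileSync('./index.html');

    const server = http2.createSecureServer({
      key: fs.readFileSync('./key.pem'),          // placeholder paths
      cert: fs.readFileSync('./cert.pem'),
    });

    server.on('stream', (stream, headers) => {
      if (headers[':path'] !== '/') {
        return stream.respond({ ':status': 404 }, { endStream: true });
      }

      // Push the CSS only when the visitor likely has a cold cache,
      // i.e. there is no same-origin referrer, and the client allows push.
      const referer = headers['referer'] ?? '';
      if (!referer.includes(headers[':authority']) && stream.pushAllowed) {
        stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
          if (err) return;
          pushStream.respond({ ':status': 200, 'content-type': 'text/css' });
          pushStream.end(css);
        });
      }

      stream.respond({ ':status': 200, 'content-type': 'text/html; charset=utf-8' });
      stream.end(html);
    });

    server.listen(8443);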


By far the lightest-weight JS framework isn't React; it's no JavaScript at all.

I regularly talk to developers who aren't even aware that this is an option.


If you're behind an overloaded geosynchronous satellite then no JS at all just moves the pain around. At least once it's loaded a JS-heavy app will respond to most mouse clicks and scrolls quickly. If there's no JS then every single click will go back to the server and reload the entire page, even if all that's needed is to open a small popup or reload a single word of text.


This makes perfect sense in theory and yet it's the opposite of my experience in practice. I don't know how, but SPA websites are pretty much always much more laggy than just plain HTML, even if there are a lot of page loads.


It often is that way, but it's not for technical reasons. They're just poorly written. A lot of apps are written by inexperienced teams under time pressure and that's what you're seeing. Such teams are unlikely to choose plain server-side rendering because it's not the trendy thing to do. But SPAs absolutely can be done well. For simple apps (HN is a good example) you won't get too much benefit, but for more highly interactive apps it's a much better experience than going via the server every time (setting filters on a shopping website would be a good example).


Yep. In SPAs with good architecture, you only need to load the page once, which is obviously weighed down by the libraries, but largely is as heavy or light as you make it. Everything else should be super minimal API calls. It's especially useful in data-focused apps that require a lot of small interactions. Imagine implementing something like spreadsheet functionality using forms and requests and no JavaScript, as others are suggesting all sites should be: productivity would be terrible not only because you'd need to reload the page for trivial actions that should trade a bit of JSON back and forth, but also because users would throw their devices out the window before they got any work done. You can also queue and batch changes in a situation like that so the requests are not only comparatively tiny, you can use fewer requests. That said, most sites definitely should not be SPAs. Use the right tool for the job


> which is obviously weighed down by the libraries, but largely is as heavy or light as you make it

One thing which surprised me at a recent job was that even what I consider to be a large bundle size (2MB) didn't have much of an effect on page load time. I was going to look into bundle splitting (because that bundle included things like a charting library that was only used in a small subsection of the app), but in the end I didn't bother because page loads were fast (~600ms) without it.

What did make a huge difference was cutting down the number of HTTP requests that the app made on load (and making sure that they weren't serialised). Our app was originally doing auth by communicating with Firebase Auth directly from the client, and that was terrible for performance because that request was quite slow (most of a second!) and blocked everything else. I created an all-in-one auth endpoint that would check the user's auth and send back initial user and app configuration data in one ~50ms request, and suddenly the app was fast.
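The shape of the change, roughly (the endpoint names here are made up, not our real ones):

    // Before: three serialized round trips on every app load.
    async function bootSlow() {
      const user   = await (await fetch('/api/auth/me')).json();
      const config = await (await fetch(`/api/config?org=${user.orgId}`)).json();
      const flags  = await (await fetch(`/api/flags?plan=${config.plan}`)).json();
      return { user, config, flags };
    }

    // After: one combined endpoint; the server does the fan-out where latency is cheap.
    async function bootFast() {
      const res = await fetch('/api/bootstrap');   // hypothetical all-in-one endpoint
      if (!res.ok) throw new Error(`bootstrap failed: ${res.status}`);
      return res.json();                           // { user, config, flags }
    }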


In many cases, like satellite Internet access or spotty mobile service, for sure. But if you have low bandwidth but fast response times, that 2MB is murder and the big pile of requests is NBD. If you have slow response times but good throughput, the 2MB is NBD but the requests are murder.

An extreme and outdated example, but back when cable modems first became available, online FPS players were astonished to see how much better the ping times were for many dial up players. If you were downloading a floppy disk of information, the cable modem user would obviously blow them away, but their round trip time sucked!

Like if you're on a totally reliable but low throughput LTE connection, the requests are NBD but the download is terrible. If you're on spotty 5g service, it's probably the opposite. If you're on, like, a heavily deprioritized MVNO with a slower device, they both super suck.

It's not like optimization is free though, which is why it's important to have a solid UX research phase to get data on who is going to use it, and what their use case is.


My experience agrees with this comment – I’m not sure why web browsers seem to frequently get hung up on only some HTTP requests at times, unrelated to the actual network conditions, i.e. the HTTP request is timing out or in a blocked state in the browser and hasn’t even reached the network layer. (Not sure if I should be pointing the finger here at the browser or the underlying OS.) When testing slow/stalled loading issues, the browser itself is frequently one of the culprits. However, the issue I’m referring to only further reinforces the article and the sentiments on this HN thread: cut down on the number of requests and the bloat, and this issue too can be avoided.


If the request itself hasn't reached the network layer but is having a networky-feeling hang, I'd look into DNS. It's network dependent but handled by the system, so it wouldn't show up in your web app requests. I'm sure there's a way to profile this directly, but unless I had to do it all the time I'd probably just fire up Wireshark.


Chrome has a built-in, hard-coded limit of six (6) concurrent requests per host. Once you have that many in flight, any subsequent requests will be kept in queue.

Now take a good, hard look at the number of individual resources your application's page includes. Every tracker, analytics crapware, etc. gets in that queue. So do all the requests they generate. And the software you wrote is even slower to load because marketing insisted that they must have their packages loading at the top of the page.

Welcome to hell.
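One mitigation, sketched below (the /api/batch endpoint and its index-aligned response are assumptions), is to coalesce your own small calls so they don't sit in that queue behind the trackers:

    const pending = [];
    let timer = null;

    // Callers get a promise per query; the actual network traffic is one POST.
    function batchedFetch(query) {
      return new Promise((resolve, reject) => {
        pending.push({ query, resolve, reject });
        timer ??= setTimeout(flush, 20);   // gather calls made within ~20 ms
      });
    }

    async function flush() {
      const batch = pending.splice(0);
      timer = null;
      try {
        const res = await fetch('/api/batch', {
          method: 'POST',
          headers: { 'content-type': 'application/json' },
          body: JSON.stringify(batch.map(item => item.query)),
        });
        const results = await res.json();           // assumed index-aligned with the queries
        batch.forEach((item, i) => item.resolve(results[i]));
      } catch (err) {
        batch.forEach(item => item.reject(err));
      }
    }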


Can you point me to a decently complex front end app, written by a small team, that is well written? I’ve seen one, Linear, but I’m interested to see more


An SPA I use not infrequently is the online catalog on https://segor.de (it's a small store for electronics components). When you open it, it downloads the entire catalog, some tens of MB of I-guess-it-predates-JSON, and then all navigation and filtering is local and very fast.


Having written a fair amount of SPA and similar I can confirm that it is actually possible to just write some JavaScript that does fairly complicated jobs without the whole thing ballooning into the MB space. I should say that I could write a fairly feature-rich chat-app in say 500 kB of JS, then minified and compressed it would be more like 50 kB on the wire.

How my "colleagues" manage to get to 20 MB is a bit of a mystery.


> How my "colleagues" manage to get to 20 MB is a bit of a mystery.

More often than not (and wittingly or not) it is effectively by using javascript to build a browser-inside-the-browser, Russian doll style, for the purposes of tracking users' behavior and undermining privacy.

Modern "javascript frameworks" do this all by default with just a few clicks.


There's quite some space between "100% no JS" and "full SPA"; many applications are mostly backend template-driven, but use JS async loads for some things where it makes sense. The vote buttons on Hacker News are a good example.

I agree a lot of full SPAs are poorly done, but some do work well. FastMail is an example of a SPA done well.

The reason many SPAs are slower is just latency; traditional template-driven is:

- You send request

- Backend takes time to process that

- Sends all the data in one go.

- Browser renders it.

But full SPA is:

- You send request

- You get a stub template which loads JS.

- You load the JS.

- You parse the JS.

- You send some number of requests to get some JSON data, this can be anything from 1 to 10 depending on how it was written. Sometimes it's even serial (e.g. request 1 needs to complete, then uses some part of that to send request 2, and then that needs to finish).

- Your JS parses that and converts it to HTML.

- It injects that in your DOM.

- Browser renders it.

There are ways to make that faster, but many don't.
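For something like the vote-button case, the hybrid version can be tiny. A sketch (the form markup, class name, and endpoint are assumptions); the control still works as a plain form POST if the script never loads:

    // Upgrade any <form class="vote" method="post" action="/vote?..."> in place.
    document.addEventListener('submit', async (event) => {
      const form = event.target;
      if (!form.matches('form.vote')) return;
      event.preventDefault();

      try {
        const res = await fetch(form.action, { method: 'POST', body: new FormData(form) });
        if (!res.ok) throw new Error(`vote failed: ${res.status}`);
        form.classList.add('voted');   // update in place, no full page reload
      } catch {
        form.submit();                 // fall back to the ordinary form submission
      }
    });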


However, getting 6.4KB of data (just tested on my blog) or 60KB of data (a git.sr.ht repository with a README.md and a PNG) is way better than getting 20MB of frameworks in the first place.


Yes. It's inexcusable that text and images and video pulls in megabytes of dependencies from dozens of domains. It's wasteful on every front: network, battery, and it's also SLOW.


The crap is that even themes for static site generators like mkdocs link resources from cloudflare rather than including them in the theme.

For typedload I've had to use wget+sed to get rid of that crap after recompiling the website.

https://codeberg.org/ltworf/typedload/src/branch/master/Make...


Yeah, but your blog is not a full featured chat system with integrated audio and video calling, strapped on top of a document format.

There are a few architectural/policy problems in web browsers that cause this kind of expansion:

1. Browsers can update large binaries asynchronously (=instant from the user's perspective) but this feature is only very recently available to web apps via obscure caching headers and most people don't know it exists yet/frameworks don't use it.

2. Large download sizes tend to come from frameworks that are featureful and thus widely used. Browsers could allow them to be cached but don't because they're over-aggressive at shutting down theoretical privacy problems, i.e. the browser is afraid that if one site learns you used another site that uses React, that's a privacy leak. A reasonable solution would be to let HTTP responses opt in to being put in the global cache rather than a partitioned cache, that way sites could share frameworks and they'd stay hot in the cache and not have to be downloaded. But browsers compete to satisfy a very noisy minority of people obsessed with "privacy" in the abstract, and don't want to do anything that could kick up a fuss. So every site gets a partitioned cache and things are slow.

3. Browsers often ignore trends in web development. React style vdom diffing could be offered by browsers themselves, where it'd be faster and shipped with browser updates, but it isn't so lots of websites ship it themselves over and over. I think the SCIter embedded browser actually does do this. CSS is a very inefficient way to represent styling logic which is why web devs write dialects like sass that are more compact, but browsers don't adopt it.

I think at some pretty foundational level the way this stuff works architecturally is wrong. The web needs a much more modular approach and most JS libraries should be handled more like libraries are in desktop apps. The browser is basically an OS already anyway.


> CSS is a very inefficient way to represent styling logic which is why web devs write dialects like sass that are more compact, but browsers don't adopt it.

I don't know exactly which features you are referring to, but you may have noticed that CSS has adopted native nesting, very similarly to Sass, but few sites actually use it. Functions and mixins are similar compactness/convenience topics being worked on by the CSSWG.

(Disclosure: I work on style in a browser team)


I hadn't noticed and I guess this is part of the problem. Sorry this post turned into a bit of a rant but I wrote it now.

When it was decided that HTML shouldn't be versioned anymore it became impossible for anyone who isn't a full time and very conscientious web dev to keep up. Versions are a signal, they say "pay attention please, here is a nice blog post telling you the most important things you need to know". If once a year there was a new version of HTML I could take the time to spend thirty minutes reading what's new and feel like I'm at least aware of what I should learn next. But I'm not a full time web dev, the web platform changes constantly, sometimes changes appear and then get rolled back, and everyone has long since plastered over the core with transpilers and other layers anyway. Additionally there doesn't seem to be any concept of deprecating stuff, so it all just piles up like a mound of high school homework that never shrinks.

It's one of the reasons I've come to really dislike CSS and HTML in general (no offense to your work, it's not the browser implementations that are painful). Every time I try to work out how to get a particular effect it turns out that there's now five different alternatives, and because HTML isn't versioned and web pages / search results aren't strongly dated, it can be tough to even figure out what the modern way to do it is at all. Dev tools just make you even more confused because you start typing what you think you remember and now discover there are a dozen properties with very similar names, none of which seem to have any effect. Mistakes don't yield errors, it just silently does either nothing or the wrong thing. Everything turns into trial-and-error, plus fixing mobile always seems to break desktop or vice-versa for reasons that are hard to understand.

Oh and then there's magic like Tailwind. Gah.

I've been writing HTML since before CSS existed, but feel like CSS has become basically non-discoverable by this point. It's understandable why neither Jetpack Compose nor SwiftUI decided to adopt it, even whilst being heavily inspired by React. The CSS dialect in JavaFX I find much easier to understand than web CSS, partly because it's smaller and partly because it doesn't try to handle layout. The way it interacts with components is also more logical.


You may be interested in the Baseline initiative, then. (https://web.dev/baseline/2024)


That does look useful, thanks!


> Oh and then there's magic like Tailwind. Gah.

I'm not sure why Tailwind is magic. It's just a bunch of predefined classes at its core.


The privacy issues aren't just hypothetical, but that aside, that caching model unfortunately doesn't mesh well with modern webdev. It requires all dependencies to be shipped in full (no tree shaking to only include the needed functions), and separately as individual files... and for people to actually stick to the same versions of dependencies.


Can you show some real sites that were mounting such attacks using libraries?


I also wonder how much saving is still possible with a more efficient HTML/CSS/JS binary representation. Text is low tech and all, but it still hurts to waste so many octets for such a relatively low number of possible symbols.

Applies to all formal languages actually. 2^(8x20x10^6) ~= 2x10^48164799 is such a ridiculously large space...


The generalisation of this concept is what I like call the "kilobyte" rule.

A typical web page of text on a screen is about a kilobyte. Sure, you can pack more in with fine print, and obviously additional data is required to represent the styling, but the actual text is about 1 kb.

If you've sent 20 MB, then that is 20,000x more data than what was displayed on the screen.

Worse still, an uncompressed 4K still image is only 23.7 megabytes. At some point you might be better off doing "server side rendering" with a GPU instead of sending more JavaScript!


> "server side rendering" with a GPU instead of sending more JavaScript

Some 7~10 years ago I remember seeing somewhere (maybe here on HN) a website which did exactly this: you gave it a URL, it downloaded the webpage with all its resources, rendered and screenshotted it (probably in headless Chrome or something), and compared the size of the PNG screenshot versus the size of the webpage with all its resources.

For many popular websites, the PNG screenshot of a page was indeed several times smaller than the webpage itself!


I read epubs, and they’re mostly HTML and CSS files zipped. The whole book usually comes in under a MB if there aren’t a lot of big pictures. Then you come across a website and, for just an article, you have to download tens of MBs. Disable JavaScript and the website is broken.


If your server renders the image as text we'll be right back down towards a kilobyte again. See https://www.brow.sh/


Soo.. there should be a standardized web API for page content. And suddenly... gopher (with embedded media/widgets).


Shouldn’t HTTP compression reap most of the benefits of this approach for bigger pages?



Surely you're aware of gzip encoding on the wire for http right?


Sure, would be interesting to know how it would fare against purpose-made compression under real world conditions still...


gzip is fast. And it was made for real world conditions.


Brotli is a better example, as it was literally purpose made for HTML/CSS/JS. It is now supported basically everywhere for HTTP compression, and uses a huge custom dictionary (about 120KB) that was trained on simulated web traffic.

You can even swap in your own shared dictionary with a HTTP header, but trying to make your own dictionary specific to your content is a fool’s errand, you’ll never amortize the cost in total bits.

What you CAN do with shared dictionaries though, is delta updates.

https://developer.chrome.com/blog/shared-dictionary-compress... https://news.ycombinator.com/item?id=39615198
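If you want to compare the two on your own payloads, Node’s built-in zlib can do it with no extra dependencies (a rough sketch; the file path is a placeholder):

    import { gzipSync, brotliCompressSync, constants } from 'node:zlib';
    import { readFileSync } from 'node:fs';

    const buf = readFileSync(process.argv[2] ?? './bundle.js');   // placeholder path

    const gz = gzipSync(buf, { level: 9 });
    const br = brotliCompressSync(buf, {
      params: { [constants.BROTLI_PARAM_QUALITY]: 11 },
    });

    console.log(`raw:    ${buf.length} bytes`);
    console.log(`gzip:   ${gz.length} bytes`);
    console.log(`brotli: ${br.length} bytes`);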


Is it supported by curl or any browser or no?


Every browser, curl yes.


False dichotomy, with what is likely extreme hyperbole on the JS side. Are there actual sites that ship 20 MB, or even 5 MB or more, of frameworks? One can fit a lot of useful functionality in 100 KB or less of JS, especially minified and gzipped.


Well, I'm working right now so let me check our daily "productivity" sites (with an adblocker installed):

  - Google Mail: Inbox is ~18MB (~6MB Compressed). Of that, 2.5MB is CSS (!) and the rest is mostly JS
  - Google Calendar: 30% lower, but more or less the same proportions
  - Confluence: Home is ~32MB (~5MB Comp.). There's easily 20MB of Javascript and at least 5MB of JSON. 
  - Jira: Home is ~35MB (~7MB compressed). I see more than 25MB of Javascript
  - Google Cloud Console: 30MB (~7MB Comp.). I see at least 16MB of JS
  - AWS Console: 18MB (~4MB Comp.). I think it's at least 12MB of JS
  - New Relic: 14MB (~3MB Comp.). 11MB of JS.
    This is funny because even being way more data heavy than the rest, its weight is way lower.
  - Wiz: 23MB (~6MB Comp.) 10MB of JS and 10MB of CSS
  - Slack: 60MB (~13MB Compressed). Of that, 48MB of JS. No joke.


I sometimes wish I could spare the time just to tear into something like that Slack number and figure out what it is all doing in there.

JavaScript should generally even be fairly efficient in terms of bytes per capability. Run a basic minimizer on it and compress it and you should be looking at something approaching optimal for what is being done. For instance, a variable reference can amortize down to less than one byte, unlike compiled code where it ends up 8 bytes (64 bits) at the drop of a hat. Imagine how much assembler "a.b=c.d(e)" can compile into, in what is likely represented in less compressed space than a single 64-bit integer in a compiled language.

Yet it still seems like we need 3 megabytes of minified, compressed Javascript on the modern web just to clear our throats. It's kind of bizarre, really.


js developers had this idea of "1 function = 1 library" for a really long time, and "NEVER REIMPLEMENT ANYTHING". So they will go and import a library instead of writing a 5 line function, because that's somehow more maintainable in their mind.

Then of course every library is allowed to pin its own dependencies. So you can have 15 different versions of the same thing, so they can change API at will.

I poked around some electron applications.

I've found .h files from openssl, executables for other operating systems, megabytes of large image files that were for some example webpage, in the documentation of one project. They literally have no idea what's in there at all.


That's a good question. I just launched Slack and took a look. Basically: it's doing everything. There's no specialization whatsoever. It's like a desktop app you redownload on every "boot".

You talk about minification. The JS isn't minified much. Variable names are single letter, but property names and more aren't renamed, formatting isn't removed. I guess the minifier can't touch property names because it doesn't know what might get turned into JSON or not.

There's plenty of logging and span tracing strings as well. Lots of code like this:

            _n.meta = {
                name: "createThunk",
                key: "createThunkaddEphemeralMessageSideEffectHandler",
                description: "addEphemeralMessageSideEffect side effect handler"
            };
The JS is completely generic. In many places there are if statements that branch on all languages Slack was translated into. I see checks in there for whether localStorage exists, even though the browser told the server what version it is when the page was loaded. There are many checks and branches for experiments, whether the company is in trial mode, whether the code is executing in Electron, whether this is GovSlack. These combinations could have been compiled server side to a more minimal set of modules but perhaps it's too hard to do that with their JS setup.

Everything appears compiled using a coroutines framework, which adds some bloat. Not sure why they aren't using native async/await but maybe it's related to not being specialized based on execution environment.

Shooting from the hip, the learnings I'd take from this are:

1. There's a ton of low hanging fruit. A language toolchain that was more static and had more insight into what was being done where could minify much more aggressively.

2. Frameworks that could compile and optimize with way more server-side constants would strip away a lot of stuff.

3. Encoding logs/span labels as message numbers+interpolated strings would help a lot. Of course the code has to be debuggable but hopefully, not on every single user's computer.

4. Demand loading of features could surely be more aggressive.

But Slack is very popular and successful without all that, so they're probably right not to over-focus on this stuff. Especially for corporate users on corporate networks does anyone really care? Their competition is Teams after all.


This is mind blowing to me. I expect that the majority of any application will be the assets and content. And megabytes of CSS is something I can't imagine. Not the least for what it implies about the DOM structure of the site. Just, what!? Wow.


Holy crap, that's too much. And this is with adblock, so it's not even the worst-case scenario.


I just tried some websites:

    - https://web.whatsapp.com 11.12MB compressed / 26.17MB real.
    - https://www.arstechnica.com 8.82MB compressed / 16.92MB real.
    - https://www.reddit.com 2.33MB compressed / 5.22MB real.
    - https://www.trello.com (logged in) 2.50MB compressed / 10.40MB real.
    - https://www.notion.so (logged out) 5.20MB compressed / 11.65MB real.
    - https://www.notion.so (logged in) 19.21MB compressed / 34.97MB real.


Well, in TFA, if you re-read the section labeled "Detailed, Real-world Example", yes, that is exactly what the person was encountering. So no hyperbole at all, actually.


I agree with adding very little JavaScript, say the 1 kB https://instant.page/, to make it snappier.


I'm getting almost 2MB (5MB uncompressed) just for a google search.


Getting at least “n”kb of html with content in it that you can look at in the interim is better than getting the same amount of framework code.

SPAs also have a terrible habit of not behaving well after being left alone for a while. Nothing like coming back to a blank page and having it try to redownload the world to show you 3kb of text, because we stopped running the VM a week ago.


Here's a browser behavior that isn't true in JS apps: in the browser, clicking a second link cancels the first one and navigates to the second one; in a JS app, the first link you click is the one that navigates you.

GitHub annoys the fuck out of me with this.


Yeah, right. GitHub migrated from serving static sites to displaying everything dynamically, and it’s basically unusable nowadays. Unbelievably long load times, frustratingly unresponsive, and that’s on my top-spec M1 MacBook Pro connected to a router with a fiber connection.

Let’s not kid ourselves: no matter how many fancy features, splitting, optimizing, whatever you do, JS web apps may be an upgrade for developers, but they’re a huge downgrade for users in all aspects.


Every time I click a link in GitHub, and watch their _stupid_ SPA “my internal loading bar is better than yours” I despair.

It’s never faster than simply reloading the page. I don’t know what they were thinking, but they shouldn’t have.


I have an instance of Forgejo and it’s so snappy. I’m the only user, but the server has only 2GB and 2 vcores, with other services present.

On the other side, Gitlab doesn’t work with JS disabled.


Between massacring the UX and copilot I've more or less stopped engaging with github. I got tempted the other day to comment on an issue and it turns out the brain trust over at Microsoft broke threaded comment replies. They still haven't fixed keyboard navigation in their bullshit text widget.

I could put up with the glacial performance if it actually worked in the first place, but apparently adding whiz bang "AI" features is the only thing that matters these days.

The whole thing smacks of a rewrite so someone could get a bonus and/or promotion.


Surprisingly, my experience of GitLab is even worse! How's yours? BitBucket wasn't much better from memory either. Seems like most commercial offerings in this space suck.


I've been using Sourcehut. I respect Drew's commitment to open source, but I think that a lot of the UX misses the mark. For most things I really don't want an email-based workflow, and some pieces feel a bit disjointed. Overall though it has most of the features I want, and dramatically less bullshit than GitHub.


Loading an entire page with cached pictures is more or less instant, connection wise, though.


In my experience page weight isn't usually the biggest issue. On unreliable connections you'll often get decent bandwidth when you can get through. It's applications that expect to be able to make multiple HTTP requests sequentially and don't deal well with some succeeding and some failing (or with network failures in general) that are the most problematic.

If I can retry a failed network request, that's fine. If I have to restart the entire flow when I get a failure, that's unusable.
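A retry wrapper for idempotent GETs covers a surprising amount of this; a minimal sketch:

    async function fetchWithRetry(url, { retries = 3, baseDelayMs = 500 } = {}) {
      for (let attempt = 0; ; attempt++) {
        try {
          const res = await fetch(url);
          if (res.ok || res.status < 500) return res;   // don't retry client errors
          throw new Error(`HTTP ${res.status}`);
        } catch (err) {
          if (attempt >= retries) throw err;
          // Exponential backoff: 0.5s, 1s, 2s, ...
          await new Promise(r => setTimeout(r, baseDelayMs * 2 ** attempt));
        }
      }
    }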


When used well, JS will improve the experience especially for high-latency low bandwidth users. Not doing full page refreshes for example, or not loading all data at once.

So no, "no JS at all" is not "by far the lightest weight" in many cases. This is just uncritically repeating dogma. Even 5K to 20K of JS can significantly increase performance.


I always ask people to give an example of a real-world SPA where JS is "used well", and nobody could ever give me one.


FastMail is pretty good; that's my go-to example.

However, you don't need to go full SPA. "No JS at all" and "SPA" are not the only options that exist. See my other comment: https://news.ycombinator.com/item?id=40541555

Sites like Hacker News, Stack Overflow, old.reddit.com, and many more greatly benefit from JS. I made GoatCounter tons faster with JS as well: rendering 8 charts on the server can be slow. It uses a "hybrid approach" where it renders only the first one on the server, sends the HTML, and then sends the rest later over a websocket. That gives the best of both: fast initial load without too much waiting, and most of the time you don't even notice the rest loads later.


Very true, Javascript was never meant to be mandatory for web pages.

Two of the lighter options right now though seem to be things like alpinejs, htmx, etc. Basic building blocks where / if needed.


No JS can actually increase roundtrips in some cases, and that's a problem if you're latency-bound and not necessarily speed-bound.

Imagine a Reddit or HN style UI with upvote and downvote buttons on each comment. If you have no JS, you have to reload the page every time one of the buttons is clicked. This takes a lot of time and a lot of packets.

If you have an offline-first SPA, you can queue the upvotes up and send them to the server when possible, with no impact on the UI. If you do this well, you can even make them survive prolonged internet dropouts (think being on a subway). Just save all incomplete voting actions to local storage, and then try re-submitting them when you get internet access.
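Roughly like this (a sketch; the batch endpoint and payload shape are assumptions):

    const KEY = 'pendingVotes';

    function queueVote(commentId, direction) {
      const pending = JSON.parse(localStorage.getItem(KEY) ?? '[]');
      pending.push({ commentId, direction, at: Date.now() });
      localStorage.setItem(KEY, JSON.stringify(pending));
      flushVotes();                        // try immediately; harmless if offline
    }

    async function flushVotes() {
      const pending = JSON.parse(localStorage.getItem(KEY) ?? '[]');
      if (!pending.length || !navigator.onLine) return;
      try {
        const res = await fetch('/api/votes/batch', {   // hypothetical endpoint
          method: 'POST',
          headers: { 'content-type': 'application/json' },
          body: JSON.stringify(pending),
        });
        if (res.ok) localStorage.removeItem(KEY);
      } catch {
        // Still offline or flaky; keep the queue and try again later.
      }
    }

    window.addEventListener('online', flushVotes);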


It's not always the application itself per se. It's the various/numerous marketing, analytics or (sometimes) ad-serving scripts. These third-party vendors often aren't performance-minded. They could be. They should be.


And the insistence on pushing everything into JS instead of just serving the content. So you’ve got to wait for the skeleton to download, then the JS, which’ll take its sweet time, just to then (usually blindly) make half a dozen _more_ requests back out, to grab JSON, which it’ll then convert into HTML and eventually show you. Eventually.


Yup. There's definitely too much unnecessary complexity in tech and too much over-design in presentation. Applications, I understand. Interactions and experience can get complicated and nuanced. But serving plain ol' content? To a small screen? Why has that been made into rocket science?


Well grabbing json isn't that bad.

I made a CLI for ultimateguitar (https://packages.debian.org/sid/ultimateultimateguitar) that works by grabbing the json :D


It's not so much good vs. not-so-bad vs. bad. It's more necessary vs. unnecessary. There's also "just because you can, doesn't mean you should."


I live in a well connected city, but my work only pays for virtual machines on another continent, so most of my projects end up "fast" but latency-bound. It's been an interesting exercise in minimizing pointless roundtrips in a technology that expects you to use them for everything.


Tried multiple VPNs in China and finally rolled my own obfuscation layer for Wireshark. A quick search revealed there are multiple similar projects on GitHub, but I guess the problem is once they get some visibility, they don't work that well anymore. I'm still getting between 1 and 10mbit/s (mostly depending on time of day) and pretty much no connectivity issues.


Wireguard?


Haha yes, thanks. I used Wireshark extensively the past days to debug a weird http/2 issue so I guess that messed me up a bit ;)


I do that too looking stuff up.


Tbh, developers just need to test their site with existing tools or just try leaving the office. My cellular data reception in Germany in a major city sucks in a lot of spots. I experience sites not loading or breaking every single day.


developers shouldn't be given those ultra performant machines. They can have a performant build server :D


>A lot of this resonates. I'm not in Antarctica, I'm in Beijing, but still struggle with the internet.

Not even that, with outer space travel, we all need to build for very slow internet and long latency. Devs do need to time-travel back to 2005.


I'm sure this is not what you meant but made me lol anyways: sv techbros would sooner plan for "outer space internet" than give a shit about the billions of people with bad internet and/or a phone older than 5 years.


Sounds like what would benefit you is a HTMX approach to the web.


What about plain HTML & CSS for all the websites where this approach is sufficient? Then apply HTMX or any other approach for the few websites that are and need to be dynamic.


That is exactly what htmx is and does. Everything is rendered server side and sections of the page that you need to be dynamic and respond to clicks to fetch more data have some added attributes


I see two differences: (1) the software stack on the server side and (2) I guess there is JS to be sent to the client side for HTMX support(?). Both those things make a difference.


The size of HTMX compressed is 10kb and very rarely changes which means it can stay in your cache for a very long time.


I'm in embedded so I don't know much about web stuff, but sometimes I create dashboards to monitor services just for our team, so thanks for introducing me to htmx. I do think HTML+CSS should be used for anything that is a document or static for longer than a typical view lasts. arXiv is leaning towards HTML+CSS vs LaTeX in acknowledgement that paper is no longer how "papers" are read. And on the other end, eBay works really well with no JS right up until you get to an item's page, where it breaks. If eBay can work without JS, almost anything that isn't monitoring and visualizing constantly changing data (the last few minutes of a bid, or telemetry from an embedded sensor) can work without JS. I don't understand how amazon.com has gotten so slow and clunky, for instance.

I have been using wasm and WebGPU for visualization, partly to offload any burden from the embedded device being monitored, but that could always be a third machine. Htmx says it supports websockets; is there a good way to have it eat a stream and plot data as telemetry, or is it time for a new tool?


You would have to replace the whole graph every time. That probably works if it updates once per minute, but more often than that and it might be time to look at some small JS plot library to update the graph.
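For a telemetry stream you can also skip the library entirely; a rough sketch that eats a WebSocket and draws onto a canvas (the URL and the { value } message shape are assumptions):

    const canvas = document.querySelector('#telemetry');
    const ctx = canvas.getContext('2d');
    const points = [];

    const ws = new WebSocket('wss://example.local/telemetry');   // placeholder URL
    ws.onmessage = (event) => {
      const { value } = JSON.parse(event.data);
      points.push(value);
      if (points.length > canvas.width) points.shift();   // keep one point per pixel
      draw();
    };

    function draw() {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.beginPath();
      const max = Math.max(...points, 1);
      points.forEach((v, x) => {
        const y = canvas.height - (v / max) * canvas.height;
        if (x === 0) ctx.moveTo(x, y); else ctx.lineTo(x, y);
      });
      ctx.stroke();
    }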


It sounds like GP would benefit from satellite internet bypassing the firewall, but I don't know how hard the Chinese government works to crack down on that loophole.


I hear you on frontend-only react. But hopefully the newer React Server Components are helping? They just send HTML over the wire (right?)


The problem isn't in what is being sent over the wire - it's in the request lifecycle.

When it comes to static HTML, the browser will just slowly grind along, showing the user what it is doing. It'll incrementally render the response as it comes in. Can't download CSS or images? No big deal, you can still read text. Timeouts? Not a thing.

Even if your JavaScript framework is rendering HTML chunks on the server, it's still essentially hijacking the entire request. You'll have some button in your app, which fires off a request when clicked. But it's now up to the individual developer to properly implement things like progress bars/spinners, timeouts, retries, and all the rest the browser normally handles for you.

They never get this right. Often you're stuck with an app which will give absolutely zero feedback on user action, only updating the UI when the response has been received. Request failed? Sorry, gotta F5 that app because you're now stuck in an invalid state!
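The frustrating part is that the missing basics aren't much code; they just have to actually be written. A sketch (the 15-second timeout and the data-state convention are arbitrary choices):

    async function submitAction(button, url, body) {
      button.disabled = true;
      button.dataset.state = 'loading';             // drive a spinner via CSS

      const controller = new AbortController();
      const timeout = setTimeout(() => controller.abort(), 15_000);

      try {
        const res = await fetch(url, {
          method: 'POST',
          headers: { 'content-type': 'application/json' },
          body: JSON.stringify(body),
          signal: controller.signal,
        });
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        button.dataset.state = 'done';
      } catch {
        button.dataset.state = 'error';             // leave the UI in a recoverable state
      } finally {
        clearTimeout(timeout);
        button.disabled = false;
      }
    }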


Yep. I’m a JS dev who gets offended when people complain about JS-sites being slower because there’s zero technical reason why interactions should be slower. I honestly suspect a large part of it is that people don’t expect clicking a button to take 300ms and so they feel like the website must be poorly programmed. Whereas if they click a link and it takes 300ms to load a new version of the page they have no ill-will towards the developer because they’re used to 300ms page loads. Both interactions take 300ms but one uses the browser’s native loading UI and the other uses the webpage’s custom loading UI, making the webpage feel slow.

This isn’t to exonerate SPAs, but I don’t think it helps to talk about it as a “JavaScript” problem because it’s really a user experience problem.


Yes, server rendering definitely helps, though I have suspicions about its compiled outputs still being very heavy. There are also a lot of CSS frameworks with an inline-first paradigm, meaning there's no saving for the browser in downloading a single stylesheet. But I'm not sure about that.


Yes, though server side rendering is everything but a new thing in the react world. NextJS, Remix, Astro and many other frameworks and approaches exist (and have done so for at least five years) to make sure pages are small and efficient to load.


The amount of complexity to generate HTML/JS is a little staggering sometimes for the majority of simple use cases.

Using Facebook level architectures for actually pretty basic needs can be like hitting an ant-sized problem with a sledgehammer and wondering why the sledgehammer is so heavy and awkward to swing for little things.


Devs only build for the requirements they are given.

You want performance? Then include it in the requirements and give it the necessary time budget in a project.


Eh, I'm a few miles from NYC and have the misfortune of being a comcast/xfinity customer and my packetloss to my webserver is sometimes so bad it takes a full minute to load pages.

I take that time to clean a little, make a coffee, you know sometimes you gotta take a break and breathe. Life has gotten too fast and too busy and we all need a few reminders to slow down and enjoy the view. Thanks xfinity!


> It all ends up meaning that, even if I get a connection, it's not stable

I feel you, but the experience can actually be very good if you invest a bit of time looking at Shadowsocks/V2Ray and building your own infra.


Chrome dev tools offer a "slow 3G" and a "fast 3G". Slow 3G?

With a fresh cache on "slow 3G", my site _works_, but has 5-8 second page loads. Would you consider that usable/sufficient, or pretty awful?


It depends on whether you are OK with 1/3 to 2/3 of your visitors bouncing due to loading times, and losing 3 to 5x in conversion rate depending on sources...


No need to be sarcastic. What page load speed at "slow 3G" speeds do you believe is necessary to avoid that?

(I work for a non-profit cultural heritage organization, we don't really have "conversions")


I didn't mean this sarcastically, it is a decision and may not apply to all situations.

You can see these kinds of effects with just a few seconds' difference. Ideally I aim to stay under 2s, even on the slowest connection type; 2s is already very long for a user to wait, and many will not.

Non-profits are tricky. You could see volunteer sign-ups and donations as conversions. I manage a non-profit site as well, and unfortunately I don't have a good solution that is both fast and approachable for our staff to use, so we had to make that compromise as well.


Is tor not viable?


Or maybe, just get rid of the firewall. I am all for nimble tech, but enabling the Chinese government is not very high on my to-do list.


Please understand that the Chinese government wants to block "outside" web services for Chinese residents, and Chinese residents want to access those services. So if the service itself decides to deny access from China, it's actually helping the Chinese government.


Whether you like it or not, over 15% of the world's population lives in China.


Are you a citizen of China, or move there for work/education/research?

Anyway, this is very unrelated, but I'm in the USA and have been trying to sign up for the official learning center for CAXA 3D Solid Modeling (I believe it's the same program as IronCAD, but CAXA 3D in China seems to have 1000x more educational videos and Training on the software) and I can't for the life of me figure out how to get the WeChat/SMS login system they use to work to be able to access the training videos. Is it just impossible for a USA phone number to receive direct SMS website messages from a mainland China website to establish accounts? Seems like every website uses SMS message verification instead of letting me sign up with an email.


I guess they should fix their government, then.


What an incredibly naive and dismissive thing to say.


It isn't naive, and isn't dismissive.

The problem is the CCP.

The only fix is for the people to rise up against them.

This doesn't even have to be violent. Most of the former Soviet Bloc governments fell without any bloodshed.

What's the alternative? Wait for Xi to "make his mark on history" in the same way that Putin is doing in Ukraine because it's "naive and dismissive" to even talk about unseating him?


It is always so funny to read Americans or Western Europeans saying "just overthrow your dictator bro". Usually told by people who never faced any political violence, or any violence for that matter.

I was born in and live in an ex-Soviet country, and stating that the Soviet governments fell without any bloodshed is proof of ignorance.


By 2017 Xi Jinping already had six failed assassination attempts against him, which prompted him to perform a large-scale purge within the ranks of the CCP.

If it was all that easy, it would have been done a long time ago.


> Most of the former Soviet Bloc governments fell without any bloodshed.

That was Gorbachev. Most leaders of any country would roll tanks.


Gorbachev sent the tanks rolling in Lithuania (https://en.wikipedia.org/wiki/January_Events).


And uncensored websites that function through the great firewall would help organize that government fixing.


I mean, you're not wrong. But if you happen to not be in a position to overthrow the government, maybe the next best thing can be a more realistic approach.



