The system’s been hijacked. The craft of real engineering—building sharp, efficient, elegant systems—is buried under layers of corporate sludge. It’s not about progress anymore; it’s about control. Turning users into cattle, draining every last byte, every last cent. Yeah, it sounds dramatic, but look around—we’ve already lost so much.
I’m running 24 threads at 5GHz, sipping data through a GB pipe, and somehow, sites still crawl. Apps hesitate like they need divine intervention just to start. Five billion instructions just to boot up? My 5GB/s NVMe too sluggish for a few megabytes of binary? What the hell happened?
The internet isn’t just bloated—it’s an executioner. It’s strangling new hardware, and the old hardware? That’s been dead for years. Sure, you can run a current through a corpse and watch it twitch, but that doesn’t mean it’s alive.
> The craft of real engineering—building sharp, efficient, elegant systems—is buried under layers of corporate sludge
No. It is buried under the laziness of build-fast, optimize-next. Except optimizing never comes. Building fast requires lightweight development on heavyweight architectures. And that means using bloated stacks like JS frameworks.
If it takes a programmer an hour to optimize something that saves a second each time it is run, management thinks that is a complete waste since you have to run it 3600 times to 'break even'.
You might think that their thinking would change when you point out that the code is run millions of times each day on computers all over the world, so those saved seconds will really add up.
But all those millions of saved seconds do not affect the company's bottom line. Other people reap the benefits of the saved time (and power usage, and heat generated, and ...) but not the company that wrote the code. So it is still a complete waste in their minds.
Multiply this thinking across millions of programs and you get today's situation of slow, bloated code everywhere.
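To put rough, purely illustrative numbers on it: an hour of programmer time to save one second per run breaks even after 3,600 runs. If the code runs a million times a day, that is about 1,000,000 saved seconds, roughly 11.5 days of aggregate compute, saved every single day. But those savings land on other people's machines and power bills, not on the books of the company that wrote the code.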
As improvements to manufacturing tech and CPU designs become unable to deliver the improvements that they used to, the cost of computer time will approach the cost of programmer time. As they converge and possibly flip, optimizations will become more useful (and required) to produce the gains we've become accustomed to. I'm not sure how many years away that is.
I agree. Hardware improvement will hit a wall at some point. From then on, all performance improvement will have to come from software optimization. I do wonder if AI will allow that to happen much quicker.
Could an AI just crawl through an organization's entire code and make optimization recommendations, which are then reviewed by a human? It seems like there could be a lot of low hanging fruit.
No, it is the result of using the user's computer to run your ad program. No one gives a shit about JavaScript, as long as it runs on someone else's computer.
Except back when we didn't program like this, it didn't take that much longer. It's the result of shitty technology stacks, like the archetypical Electron. We used to make things right and small at the same time.
Even Electron probably could have been fine if the browser was just a document layout engine and not a miniature operating system. There was an article going around a few years ago about Chrome - and by extension, Electron - including a driver for some silly device I don't remember, like an Xbox controller or something. Googling tells me it wasn't an Xbox controller though. Every Electron app includes an entire operating system, including the parts not needed by that app, including the parts already included in the operating system you already have.
Language runtimes don't have to be this way, but we choose to use the ones that are!
> We used to make things right and small at the same time.
Memory is infinite, CPU is infinite, disk space - we don't give a shit because they are all on the sucker's computer.
Just like "your privacy" (aka your data) is very important for us, also your computing power is very important for us.
I wish i was sarcastic.
100%. I had my first obviously AI-written email the other day, and that was one of the clear tells.
I was trying to figure out what made it so obvious. The dashes were one thing; the other things I noticed were:
- Bits of the text were in bold.
- The tone was horrible, very cringe. Full of superlatives, adjectives and cliches.
- I live somewhere where English is the third language, most people don't write emails in English without a few spelling or grammar mistakes.
- Nor do they write in paragraphs.
- It's also pretty unusual to get a quick reply.
Lots of these things are positive, I guess. I'm glad folks are finding it easier to communicate quickly and relatively effectively across languages, but it did kinda make me never want to do business with them again.
On Linux (maybe only certain distros, not sure) the keys are different, but you can enable a Compose key and enable special character keybinds as well.
For example on Mint en–dash is "<compose> <minus> <minus> <period>" and em—dash is "<compose> <minus> <minus> <minus>"
I do a similar thing on Linux with a feature called the "Compose Key". I press the compose key (caps-lock on my keyboard), and then the next couple of keypresses translate into the proper character. "a -> ä, ~n -> ñ, etc.
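If your distro doesn't already ship the sequences you want, they can be added or overridden in ~/.XCompose. A minimal sketch (assuming the Compose key is already enabled in your keyboard settings; the exact file location can vary by setup):

  include "%L"   # keep the locale's default sequences
  <Multi_key> <minus> <minus> <minus>  : "—"  emdash   # em dash
  <Multi_key> <minus> <minus> <period> : "–"  endash   # en dash
  <Multi_key> <asciitilde> <n>         : "ñ"  ntilde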
Are you actually measuring the load-time bottlenecks in devTools?
I don't know the exact details but it appears a lot of sites are sitting around waiting on ad-tech to get delivered before they finish loading content.
great, now I have to un-learn using 'proper' punctuation.
It's AltGr+[-] (–) or AltGr+Shift+[-] (—) on my keyboard layout (linux, de-nodeadkeys) btw.
AltGr+[,] is ·, AltGr+Shift+[,] is ×, AltGr+[.] is …, AltGr+[1] is ¹
[Compose][~][~] is ≈, [Compose][_][2] is ₂ (great for writing H₂O), [Compose][<][_] is ≤, etc.
I use all of these, and more, guess I'm an AI now :(
To be fair I intentionally use — incorrectly by putting spaces around it, just because I hate how it looks without the spaces ("correct" English grammar says there should be no spaces.)
It provides an interesting test case for the usefulness (or lack thereof) of AI detectors:
ZeroGPT: Your Text is Human written (0% AI GPT)
QuillBot: 0% of text is likely AI
GPTZero: We are highly confident this text was ai generated. 100% Probability AI generated
Grammarly: 0% of this text appears to be AI-generated
None of them gave the actual answer (human written and AI edited), even though QuillBot has a separate category for this (Human-written & AI-refined: 0%).
this puts you on a desktop with a full-size keyboard or a laptop with a numpad then, which is a very small minority these days with a definite dev-centric skew.
Indeed I am, but the point here is that some users are actually typing em-dashes outside word processors or publishing/typesetting tools (e.g., on HN)—so it's not necessarily a sign of a message written by an AI (m-dash pun intended). The poster could as well be a developer with a full-size keyboard.
On many Android keyboards you can press and hold various keys to get access to many of the "extra" punctuation characters and fancy "foreign" letters. I imagine the same is also true on Apple phones as well.
quite surprised this comment got so much debate after i immediately agreed i used chatgpt -or did i?
(u see i dont know how to punctuate , i am not so punctual!)
> I’m running 24 threads at 5GHz, sipping data through a GB pipe, and somehow, sites still crawl.
Aside from websites (we will talk about those in a minute), how is performance? I am running Windows 11 on new hardware and it is running great. I built a personal box with 64GB of the fastest DDR5 and an AMD 9900xtx. The most expensive component (and least bang for the buck)... the video card. This is my first time having an NVMe disk and it's absolutely amazing.
I am running Debian 12 on a mini computer with much less hardware and it's doing great there too. I can run anything on that box except AAA games and 4K video playback.
Now, for the web talk. I was a JavaScript developer for 15 years, and yes, it's garbage. Most of the people doing that work have absolutely no idea what they are doing, and there is no incentive to do so. The only practical goal is to put text on a screen, and most of the developers doing that work struggle to do even that. It's why I won't do that work any more; it's a complete race to the bottom. If I see a job post that mentions React, Vue, or Angular, I stop reading and move on.
Where did you move to from JS? I am trying my best to learn low-level stuff to NOT end up in these situations. But the frameworks are already bloated, and any optimisation feels useless.
I'd have just one final remark: it really is not an engineering problem but rather a business decision.
After all, having paid gazillions to engineers and project managers to build the sludgefest, all that cash needs to be harvested back into the pockets.
not untrue. though many good projects existed because of passion for good engineering. there is much less good open source now. people want money for their time...
People need to realize that leisure time - time off work, commute, chores - is paid for by their employer (as far as they're concerned). Which is to say that those of us with only a couple of hours to ourselves a day are being stiffed, no matter how much money we make. Stop letting the workaholics dictate how the rest of us live.
People tell me all the time that I should just open source my project that I have spent thousands of hours developing. As if doing so would make money magically appear in my bank account.
It's accepted and known, but in an economy where most megacorps make their money via enshittification, the well-paid engineers who get paid to shovel the aforementioned shit down our throats don't like being reminded of their essential role.
Going from 1GHz to 5GHz should make single threads go a little faster?
IO might be a bottleneck on spinning rust, but we've come far from those days too.
Well it wouldn't be that way if it were not for all the JavaScript. If we just kept on doing server-side scripting (PHP, CGI/Perl...) and used JS only where absolutely necessary (video players, games...) like in the early 2000s, it would all work fine on 15 year old hardware. But instead we use the browser as an OS and have tons of JS on simple news sites.
I suspect DOM modelling/rendering also has an impact as I've surfed with JS off, using Firefox on an older (mid-aughts) iMac running Linux ... and can't get much past 2--4 tabs before performance becomes absolutely unacceptable.
Instrumentation of browsers to show performance constraints remains poor and/or a Black Art, so I'm somewhat winging this.
The same system performs wonderfully with scripts, non-browser based requests (e.g., wget, curl), terminal-mode browsers (usually w3m or lynx), and local applications (mailers, audio ripping / playback, GIMP, word-processing, and of course shell tools, as well as a few occasional games perhaps).
Particularly where it's not clear whether or not someone's making a joke, or is merely misinterpreting language. A common occurrence where many participants' first language is not English.
And especially where the joke itself runs into a tired political fracture.
Which is a reflection of those writing the sloppy client code in the first place.
Back in the 8- and 16-bit days, companies managed to make software available across all of them, or at least the most common platforms, at a time when performance called for hand-written Assembly and each hardware platform was its own snowflake.
And yet in the age of high-level programming languages, the best most folks can think of is shipping a browser alongside the application. Not only do they show complete disregard for the platform, they don't really care about the Web either, only about ChromeOS.
And yes, this includes VSCode, which has tons of C++ and Rust code, and partial use of WebGL rendering, to work around the exact problem of being based on Electron.
Even having a few 'retro' systems up and running, mostly Macs from the early 2000s, I find that the things we used to do on them: chatting on Skype or MSN Messenger, listening to Shoutcast streams or downloading on Napster, playing Unreal Tournament online, etc., are mostly defunct now. What remains are local games, clunky word processors and MP3s on the local network. It turns out to be a largely empty experience unless you really get back into Command and Conquer or Metroid.
I'm pretty heavily involved in SD video restoration and I drive things from "retro" systems mostly because I need 32-bit PCI slots and Windows XP in order to interface with older broadcast engineering hardware.
I could shoehorn everything into a more modern system, but what I have suits my needs.
Marathon rockets of fury, networked. Carmageddon and Myst/Pyst. Spaceward Ho! Where were you? {Fires rocket} Pathways into Darkness. Lode Runner, Dungeons of Doom, Oregon Trail.
I watched a friend's kid play a game on his offline iPhone. So. many. freaking. ad interruptions! Fucking tragic, I'm glad my childhood gaming wasn't like that...
For most local applications, or simple over-the-Web fetches via curl, wget, etc., mid-aughts hardware or earlier often suffices.
Amongst my hobbies is the occasional large-scale scraping of websites. I'd done a significant amount of this circa 2018-19 on a mid-aughts iMac running Linux. One observation was that fetching content was considerably faster than processing it locally, not in a browser but using the html-xml-utils package on Debian. That is, DOM structures, even when simply parsed and extracted, provide a significant challenge to older hardware.
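For context, the kind of pipeline I mean is roughly the following, where example.com stands in for whatever site is being scraped:

  curl -s https://example.com/ | hxnormalize -x | hxselect -c title   # grab the page title
  curl -s https://example.com/ | hxnormalize -x | hxwls               # list the page's links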
I had the option of expanding RAM, and of swapping in a solid-state drive, both of which I suspect would have helped markedly (swapping was a major factor in slowness), though how much I'm not sure.
I'll also note, as I have for years, that this behaviour serves advertisers and merchants as a market segmentation technique. That is, making sites unusable on older kit, in a world where physical / real-estate based market segmentation isn't possible, is an effective way of selecting for those able to buy modern devices, whom we presume to have greater discretionary income and higher conversion rates as well.
(Requiring multiple network round trips is also a way to penalise those connecting from distant locations, as those 100-300 ms delays add up over time, particularly for non-parallelisable fetches.)
I'm not arguing that all site devs have this in mind, but at the level which counts, specifically within major advertisers (Google, Facebook) and merchants (Amazon), this could well be an official policy which, by nature of their positions (ad brokers, browser developers, merchants, hardware suppliers), gets rippled throughout the industry. In the case of Apple, moving people to newer kit is their principal revenue model as well.
It seems to hold up pretty well for an 11 year old netbook which was quite underpowered even when it came out. The equivalent would be someone in 2014 making a video about how their Pentium IV setup from 2003 is killed by the modern internet. And actually that's a bit unfair, as Pentium IV was a premium product while this netbook was not.
What are the webpages shown in this video supposed to do¹? Display some text, pictures and maybe some videos. How does that feel, in terms of complexity and hardware requirements, compared to a 1998 FPS that delivered an impressive gamer-experience breakthrough in a consumer-grade product? Does that seem like a fairer comparison?
Now, obviously you can't expect every webdeveloper intern out there to reach the level of Valve engineers in 1998, sure. But the frameworks they are asked to use should make the sober way the easy path, and leave more complex achievements accessible within the remaining computational resources.
¹ As opposed to something using WebGL or other fancy things incorporated in contemporary browsers.
> A solution is needed to help those old computers
A solution could be to put another layer on top of the internet. This could be done by means of a "presentation proxy", similar to cloud gaming, e.g. based on VNC, where only a VNC client is run on the old computer, and the browser is running on the presentation proxy.
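A rough sketch of that idea with standard tools (assuming Xvfb, a browser and x11vnc are installed on the proxy host; display numbers and ports are illustrative):

  # on the beefier "presentation proxy" machine
  Xvfb :99 -screen 0 1280x800x24 &           # headless X server
  DISPLAY=:99 firefox &                      # the browser renders here
  x11vnc -display :99 -forever -localhost &  # export that display over VNC

  # on the old computer: only a thin VNC client runs locally
  ssh -N -L 5900:localhost:5900 proxyhost &
  vncviewer localhost:0

Latency and frame rate then depend on the VNC encoding and the link, not on the old machine's ability to run a modern browser.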
There are solid reasons to put your browser executable and storage somewhere other than your principal desktop in the first place.
That still leaves the attack-surface of browser-based activities (do you really want your recreational activities sites interacting with your financial services?), but both gives the option for fresh respawns (OS and/or browser) and physical and network isolation of your local storage and data from your browser.
(This presumes hygiene on any browser-based downloads as well, of course.)
In general, the idea that we'd want all our data on a globally-accessible network is seeming increasingly unwise to me over time, given both technical and political developments.
Late edit: Oh, and Browsh: text-only browser-via-proxy supporting CSS/JS.
My non-tech friends call me paranoid at the basic level of protection I do - separate browsers for everything, and a couple of VMs for e.g. finances.
VLANs have always been an important part of LAN security, but we're just out here with always-on, full Internet access? My home firewall logs show an insane amount of bot / scraping traffic etc.
It's also a good way of isolating proprietary applications that barf services, applications, and custom ways of keeping user state all over the system – I'm looking at you, Citrix.
That's cool, thanks for the hint. Do you know what frame rate / latency can be achieved with this approach? I've read that the limit of VNC is about 7 fps.
Instead of using a more powerful computer to be the presentation proxy for a less powerful one, the more obvious solution would be to upgrade to a not-so-old computer directly? Alternatively, don't try to load heavy sites on hardware that is too light for it.
I don't think the problem here is the age of the system, but that it was an extremely crappy system even in its time. It's a 4-watt pre-Zen AMD CPU running at 1GHz. It's a CPU intended for tablets, and even for that it's bottom of the barrel. Something like an i7-4770K from the same year (2013-2014), which I recall being very popular at the time, is over an order of magnitude faster. The CPU here is more comparable to CPUs from almost a decade prior; the venerable Thinkpad T61 would probably perform better.
At the same time, at no point in the history of computing has a 10-year-old PC been as useful and usable as it is today, provided the PC was half-decent to begin with.
Using a 2015 PC in 2025 is far less painful than using a 1995 PC in 2005 was.
My desktop is from 2012, so 13 years old so far, but is still very capable at any task I throw at it. It was originally a high-end workstation, but by 2020 was worth so little that I got it for free from someone moving out of town. Last year, upgrading the CPU to the top of the line part that fit the motherboard socket cost $17 (versus an original MSRP of $2300), and upgrading it to 128GB of RAM cost $40.
When even top-of-the-line older hardware is nearly free, it makes little sense to optimize for bottom-of-the-line older hardware.
It does very well on any modern internet task, as well as playing modern video games with a few-year-old used graphics card.
I feel Sandy Bridge (2011) and Haswell (2013) were major turning points. Haswell is especially significant because it forms baseline for x86-64-v3, which e.g. RHEL and others are migrating towards: https://developers.redhat.com/articles/2024/01/02/exploring-...
That is also one of the potential problems with pre-Haswell hardware: distros might stop supporting it in the near future.
Those netbooks were hot garbage when they came out. I remember trying a bunch of them, and they were all slow and shitty even when they came out. I know a few people who bought various brands, and they basically got put away in a desk and never used. Most of them had such low resolution, you couldn't even see the entirety of the Control Panel dialog boxes in Windows. They could barely play videos without shit chopping up. Web surfing was slow back then. They were horrific.
Indirect point: using an ad blocker protects the environment. Considering it also helps security (loading fewer things = fewer chances at exploits), it should really be the default.
Not only the Internet, but also its technologies. Electron apps bring older computers to their knees, and those apps are becoming ubiquitous (MS Teams, MS VS Code, Whatsapp, Signal...). Sometimes they are even labeled as "lightweight".
Here is a quick example of an application labeled as lightweight, which turns out to be a 500MB+ Electron monstrosity:
"MarkText is a lightweight, user-friendly Markdown editor that serves as a free and open-source alternative to Typora. It’s designed for everyday users who want a clean, intuitive experience."
If the application is free (with no strings attached), I would not really complain. But the main offenders are apps by large companies that have revenues in billions. The problem is that most of the userbase do not complain.
I’ve seen multiple VS Code users claim it’s lightweight and fast. And to be fair compared to many other Electron apps it is, but many editors still run circles around it.
Last time I looked at benchmarks, editors like Sublime, Emacs, gVim etc and even some IDEs had lower input latency. Zed is probably the most comparable editor that’s both faster and more power efficient.
Anecdotally on my previous laptop from 2016 it was often laggy and took longer to process a single key input than Vim took to start up and load plugins, and natively compiled editors like Emacs and Sublime tend to be noticeably snappier.
Whether that matters to you or not is subjective, but I don’t like editors pretending to be IDEs anyway.
It's not surprising that editors / IDEs supporting fewer features are going to be faster. For example, according to https://github.com/zed-industries/zed/issues/5065, Zed doesn't support Build/Debug actions, which makes it a no-go for embedded development immediately. At that point I'd rather just use nvim with plugins.
When I had an old computer that couldn't deal with the indexing done by JetBrains' CLion or Microsoft's Visual Studio Code, I used nvim. It was a pleasant experience; however, it lacked support for visual debugging (and please don't talk about GDB TUI as if it is an option). Now that I have a computer that can deal with the indexing, Visual Studio Code is just fine. In fact, it is considerably more lightweight than JetBrains' CLion IDE and is very easy to set up.
For example, https://code.visualstudio.com/docs/devcontainers/create-dev-... allows you to set up development inside a container. In practice this allows anyone to quickly pull the repository and start working on the code, including building and debugging, without having to worry about setting up toolchains or an environment, as it'll all be done automatically for you.
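For illustration only, a minimal .devcontainer/devcontainer.json might look roughly like this (the image name, extension and command below are placeholders, not a recommendation):

  {
    "name": "my-project",
    // container image the workspace runs in
    "image": "mcr.microsoft.com/devcontainers/cpp",
    "customizations": {
      "vscode": {
        // editor extensions installed inside the container
        "extensions": ["ms-vscode.cpptools"]
      }
    },
    // runs once after the container is created, e.g. to fetch dependencies
    "postCreateCommand": "make deps"
  }

VS Code (with the Dev Containers extension) builds and attaches to that container, so the toolchain lives in the image rather than on your machine.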
> It's not surprising that editors / IDEs supporting fewer features are going to be faster
True, though performance is often completely unrelated to the amount of features. The build/debug actions you mentioned should not have any impact on text editing speed.
Supporting additional features generally introduces complexity. This is why people drift towards using larger platforms (e.g. Electron) to build their applications as it reduces the complexity of introducing new features. As it stands today, Zed is unusable for my use-case due to a lack of support for features I need.
I find VS Code to be unmanageable for anything beyond a medium-sized project. Maybe the LSPs I use are to blame, but I find nvim less problematic in this regard.
Language servers as well. I had a desktop bought around 2015, with 16GB of DDR3 RAM. That was quite a lot back then. For some reason I used it for a while, and I needed an isolated development environment, so I installed Debian server with Qemu/KVM and assigned 4GB of RAM to it. It looked okay when starting neovim, which took a few hundred MB, but when starting `tsc`, especially for two or three separate projects, the RAM was not enough any more. The Lua language server also needs a lot of RAM.
My father in-law did web development for years, but has been retired for a while. I mentioned this to him briefly, and he said pretty nonchalantly “yeah, we were always pressured to push everything to the client to improve response times.” I’m sure there’s more to it than just that, but it was all very simple to him.
1. Using uBlock Origin and NoScript would help
2. Sorry, but the AMD A4-1200 (2 Cores running at 1.0 (!) GHz, 4W TDP, Single Channel DDR3-1066[1]) was already slow when it was new back in 2013, it was introduced by AMD as a low budget *Tablet* option
3. Regarding video playback: As mentioned in the video, forcing YouTube to serve h264 should help, since the iGPU supports h264 decoding [2][3]
This also makes me wonder: what does YouTube serve on these old machines? On my old Vega 56 (UVD 7.0, h264/h265/JPEG decoding support), YouTube runs without issues, but on the HD 8180 shown here (UVD 4.2, h264 decoding support) it doesn't? My current system has an Nvidia 1050 and gets served AV1, which the 1050 also does not support in hardware.
My first instinct for YouTube's "auto" implementation would be to serve the video in 480p in a codec the user's hardware decoder supports, and, if the user switches to a higher resolution, to serve it in AV1 in order to preserve bandwidth. Maybe YouTube does not have access to the user's hardware decoding capabilities, or they want to preserve bandwidth even at low resolutions.
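For what it's worth, browsers do expose a way to ask about this: the Media Capabilities API reports whether a given codec/resolution combination is expected to be smooth and power-efficient (i.e. hardware-decoded). A rough sketch of the query a site could make (the codec string and numbers are just example values):

  // Ask the browser whether 1080p30 H.264 playback would be hardware-friendly.
  const info = await navigator.mediaCapabilities.decodingInfo({
    type: 'file',
    video: {
      contentType: 'video/mp4; codecs="avc1.42E01E"',
      width: 1920,
      height: 1080,
      bitrate: 2000000,
      framerate: 30,
    },
  });
  console.log(info.supported, info.smooth, info.powerEfficient);

Whether YouTube actually keys its codec choice off this signal is another question; this only shows that the capability information is available.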
I was using a netbook with a comparable CPU as a work machine around that time. Using the web with it wasn't fun back then either, on Windows or Linux. I don't think there's any point in time when these underpowered processors would've been fine to use for browsing mainstream sites.
While watching the video you can right click and choose "Stats for nerds". This opens an overlay in the top left corner showing a lot of detailed information about the video including video and audio codecs.
My favorite Firefox plugin is NoScript. There is often an incredible amount of third-party JavaScript running on commercial web sites. NoScript turns all the JavaScript off by default, and then you can whitelist it temporarily (or permanently, your choice) at the per-domain level. (I will never whitelist Google Tag Manager; how is it present on every freaking website?)
So I visit thing-i-want.com and it doesn’t load because NoScript is currently disabling JS for that domain. No problem, I temporarily enable JS for thing-i-want.com
The page refreshes and suddenly NoScript is disabling JS for 10 more domains!
That seems excessive, maybe the page doesn’t need ALL of those scripts to function.
I will enable that cloudfront domain and that one that has “static content” in the name.
Page refreshes.
Okay it mostly works now but also NoScript is showing disabled JavaScript from 5 more domains!
..Anyways
Sometimes sites are running scripts from 15 or more domains and sometimes they are nested 4 domains deep.
It’s absurd and OF COURSE it overwhelms older devices.
If you want to use a modern browser on an older device, use a browser with a script blocking plugin
NoScript is an antipattern, because it disincentivises browsing new websites. As you observed, for every new website you need to reload the page quite a few times to get it functional.
I don’t ever install NoScript on family/friend devices. It just breaks their experience. I have knowledgeable coworkers that won’t use it because it’s not worth the hassle for them.
So I agree it is an anti-pattern for typical use cases.
But if you’re trying to get the most out of old hardware, it will make some websites more usable.
GTM is just another JS CDN, like unpkg and jsDelivr and others I forget. What amuses me is sometimes a site will use all three. Often, none are necessary for the core site to work; having had to help add support for GTM once to a site builder product, I think the target demographic is PMs who want to add random marketing/ad/analytics/audience-segmented scripts to a site.
Very rarely it'll happen that I'll care enough to go through the list of possible domains to temporarily whitelist before finally giving GTM a shot, then immediately remove the whitelist. Usually I don't get that far, especially because if it hasn't worked by then, enabling GTM doesn't tend to work either, it's just a bad site that isn't actually providing what it claimed to provide. NoScript has never disincentivized me from visiting a new site, but it has made me give up on some or look for alternatives. My daily experience is pretty minimally impacted by it. (Still, I don't usually bother installing it on work machines or my travel laptop (which is remoting to my home PC most of the time anyway), and sometimes I'll just load the page up in a chromium tab (incognito or not) rather than play the game of five refreshes from whitelisting JS.)
The performance impact is quite minimal I think, especially if you compare the difference between Firefox with NoScript and Chromium without; the latter is just faster because it's not Firefox. The oldest machines I still use sometimes are from 2009 (an i7 920, pretty good for the time), on which, as my old daily driver, I used NoScript, and from 2017 (my travel laptop with an i7 7820HQ), where I don't bother. Neither is all that slower for web stuff than my current daily driver with a Ryzen 9 5900X. The web is just slow even with newer hardware. (In contrast to others here though, I immediately notice the difference better hardware makes with local applications, especially content-authoring ones like GIMP or Krita.)
A new release of software or technology isn't adopted for its new features. New technologies help create new security vulnerabilities, which in turn force new releases of the tech. It's a vicious circle where tech and hacks play a catch-up game. Old PC hardware is like the Mayans or an uncontacted tribe on an island: it can't tolerate being exposed to the new world of the internet.
Also, companies don't want to invest in supporting multiple versions at any point in time, and can't afford reputation risk by not forcing upgrades.
My company lets employees request and get software installed, but can hardly allow them to use its features! The Risk & Compliance department wouldn't like anyone to work with or use any software properly. Any moving thing is a risk.
> I mean it's not literally killing old PC hardware, but it is meaning old PCs are now becoming worthless e-waste (aka dead) because they can't browse websites like YouTube and playback videos... The internet is not designed for older hardware anymore, so it doesn't really matter what operating system you put on it, normal websites are now so multi-media rich and advert packed, that older hardware is going to struggle
Hah, posted as a comment to a YouTube video.
I know you have to go where your audience is. But like, block ads, block most JavaScript, don’t go on YouTube, and the internet is much better.
This shoves you off into a smaller sub-sector of the internet. But if you are somebody who is nostalgic for the era when the internet didn’t suck so bad, it was quite a bit smaller back then too.
I actually got an old iMac G3 connected to the internet and browsing the web. The previous owner had Internet Explorer on it, which seems to work better than Netscape, though neither supports HTTPS. I was able to download files to it using a locally hosted HTTP server on my PC, but browsing the internet with anything resembling JavaScript was out of the question.
National libraries across the world curate books, documents, and other formats, but we as humanity are unfortunately not very good at preserving the software side of history.
Yep. Somewhere I was browsing had a dark mode theme done by using a full page invert and hue rotate but had to exempt the images from that transform. It was about as slow as you might imagine a per-pixel transform (or two) would be.
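For reference, that kind of theme usually boils down to a couple of rules like the following (a sketch of the common approach, not necessarily what that particular site did), and the second rule, re-inverting every image and video, is exactly the per-pixel work that gets expensive:

  html {
    filter: invert(1) hue-rotate(180deg);  /* flip the whole page to "dark" */
  }
  img, video {
    filter: invert(1) hue-rotate(180deg);  /* invert again so media looks normal */
  }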
I have slow, limited internet. I'm now blacklisting sites in my hosts file that suck up bandwidth. Some I can't block, of course. I find it particularly funny that my bank has my browser ping Spotify, for instance. As others have said, it's just enshittification all around.
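For anyone unfamiliar, hosts-file blocking just means pointing the offending domains at a dead address; the domains below are made-up placeholders:

  # /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
  0.0.0.0  ads.example.com
  0.0.0.0  tracking.example.net

It's cruder than a proper blocker like uBlock Origin, but it applies to every app on the machine, not just the browser.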
Agreed. Been around long enough to see u/sers from another site change the culture here subtly.
There was an interesting FB engineering post here about cost benefits to using a QLC storage layer... No comments. I feel like a lot of technical people nowadays don't even appreciate that we get FAANG engineering blogs and talking about hardware. Maybe I'm just jaded from having worked in a data center.