
We don’t need to worry about memory efficiency until we stop getting gains via hardware improvements. For now developers can just slap a web app into some Chromium-based wrapper, make sure their code doesn’t have any O(n^2) in it, and they’re good to go.


Tell that to the person on a fixed income who has to invest in an expensive new machine because their 2015 laptop (which still has a whopping 4 GB of memory and a CPU that would have been top-of-the-line twenty years ago) has become unusably slow.

Software efficiency is a serious equity and environmental issue, and I wish more people would see it that way.


This is why I argue that one of the best things the Free/Libre software developer community can start doing is optimizing for lower-spec machines. Microsoft and Apple are either too closely knit with hardware vendors, or directly sell the hardware themselves, to have any interest in prolonging the lifetime of the machines they sell. Optimizing open OSes can prolong the lifetime of hardware by a significant margin, and it means lower-income folks are not left in the dark. I don't just mean in well-off countries: if you are in the lower classes of the global south, there is no other option.

There was (is? - not sure) a version of Firefox for PowerPC Mac OS X - TenFourFox - that brought modern Firefox features and support to Macs long past their prime. The developer mentioned that his favorite report from its time in development was: "One of my favourite reports was from a missionary in Myanmar using a beat-up G4 mini over a dialup modem; I hope he is safe during the present unrest."

http://tenfourfox.blogspot.com/2020/04/the-end-of-tenfourfox...

This is what can happen when things are optimized for the people, not the business. This is part of why I still use a Core 2 Duo as my daily driver; if it ain't broke, don't fix it.


>This is why I argue that one of the best things that the Free/Libre software developer community can start doing is optimizing for lower spec machines.

But isn't the primary application for these machines going to be the web browser, which is pulling in so much JS insanity that the web sites won't render well anyway?


Yeah, that is unfortunately a big part of it. Via NoScript I generally run the web very lean but there is only so much one can do.


It is depressing that I had this same argument in college a decade ago and people are still so cavalier about not optimizing their code.


Because the person on a 2015 budget laptop isn’t the one paying their wage to optimise apps.

Companies will invest in what pays the bills. And hyper optimising for customers with no money isn’t it.


To be fair, if you forced programmers to write efficient code you would just make everything more expensive and flood the market for unskilled labor with university graduates that can't find their own ass.


If it really did come down to that, I would still rather people had to pay more for software and less for hardware, because software has a comparatively minuscule environmental impact.


Actually no. If programmers actually learned how to properly program the machines, we'd not be in the mess we are in right now. Abstraction is the cancer that got us to where we are.

Nobody has any actual clue what they're doing, everyone keeps writing code for the compiler hoping for the best and the rest of the world has to buy new machines because the programmers of the last decades sucked.

That, btw, includes most of you people reading this. You're fucking welcome.


No need to invest in an expensive new machine; a device from 5 years ago, with some added RAM, would already be pretty adequate. Typing this from a ThinkPad T470, introduced in 2017, which is my main workhorse machine.

A top-of-the-line laptop CPU from 20 years ago likely just doesn't support addressing more than 4 GB of RAM. Forcing it to work on modern resource-heavy Web pages and media is like forcing a GPU from 20 years ago to run Skyrim. It's just not adequate.


20 years ago is pushing it a bit. But 12 years ago, in 2008, I used a computer with 4GB of RAM in order to:

• Read the news

• Post on social media

• Make video calls

• Use instant messaging

• Create and edit word documents/presentations/spreadsheets

Today I use my computer for all of those same things... and yet they all require drastically more memory (and CPU, GPU, etc). What happened, and how does this benefit consumers? Yeah, modern web pages are resource-heavy—but to what end†?

In some cases, the requirements really did change. For example, I can now watch videos in 4K; my 2008 computer could handle 1080p, but I imagine it wouldn't have handled 4K as well. However, I suspect many users of old machines would be perfectly happy to drop down to a lower resolution.

---

† Something I find amusing in all this... people often say they're glad Flash applets died because they were slow. Nowadays, instead of Flash, we use browser apps written in Javascript. I wonder how "slow" those apps would run if you threw them on a computer from the Flash era. (This isn't to discount other problems with Flash, although I do think it has a worse reputation than it deserves.)


You can use a computer with 4 GB of RAM today for all the things you've mentioned. It might swap here and there and not be as snappy, but generally it'll work.

I think Apple only recently stopped selling 4 GB computers. And their phones from last year ship with 4 GB of RAM while being perfectly able to do all the things you've mentioned as well.


Yeah, I agree - I don't think RAM is usually the problem.

I used to have a 2016 dual-core MacBook Pro with integrated graphics and 8 GB of RAM or something. The machine was great when I got it, but 18 months ago it was limping along and I finally decided to get rid of it.

And it wasn't any 3rd party apps that killed the machine. Every time the machine started up, photoanalysisd or some random Spotlight service or something would be eating all my CPU. It was always a 1st party Apple app that was making it slow. And the graphics felt laggy. Just moving windows around felt bad a lot of the time, even when I didn't have anything open. Xcode would sometimes lag the machine so much that it would drop keystrokes while I was typing. I had RAM to spare - it was a CPU problem.
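
For anyone in the same boat, a quick way to see what's chewing CPU without opening Activity Monitor (this is just the stock BSD ps that ships with macOS; run it a few times to catch the bursty daemons):

    # list the top CPU consumers right now, highest first
    ps -A -o %cpu,comm | sort -rn | head -15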

In the process of wiping the machine, I booted into Recovery mode and it booted the 2016 recovery image of macos. Holy smokes - the graphics were all wicked fast again! I spent a couple minutes just moving windows around the screen in recovery mode marvelling at how fast it felt.

I wonder if reverting to an old version of macOS would have fixed my problems. As far as I can tell, this was all Apple's fault. They piled macOS up with so much crap that their own computers couldn't cope with the weight. I also wonder if they broke the Intel graphics drivers in some point release along the way, or started relying on GPU features that Intel's driver only had software emulation for.

Modern macOS still has all that crap - the efficiency cores in my M1 laptop are constantly spinning up for some ridiculous Apple service or something. But at least now that still leaves me with 8 P-cores for my actual work. It's ridiculous.

I bet Linux would have worked great on that old laptop. I wish I'd tried it before turfing the machine.


Sure, it's possible to get by on 4 GB of RAM today, but it used to work a lot better!

Compare the memory usage of:

• 2008-era Skype and iChat vs Slack, Teams, and Discord.

• 2008-era web pages (including with Flash embeds) vs modern web pages.

• Microsoft Office 2007 vs current Microsoft 365.

And it's not only or even primarily memory, but also CPU requirements and so on.


While I do agree with this, it seems worse than that - I've observed a number of systems that used to run well 5 or so years ago and simply don't any more, even with exactly the same OS and essentially the same software. I don't know to what degree that is down to actual hardware deterioration (or at least file system fragmentation), vs additional gumpf getting automatically installed and slowing things down (every time I've tried to remove such gumpf, it hasn't really helped), or even user perception (but I don't buy that perception explains apps that now take over 30 seconds to start up when they used to take 5 at most). I have one 8+ year old Windows 7 machine in particular that I use for music streaming, and it basically can't be used for at least 30 seconds after logging in - but then it seems mostly fine after that.


"Windows Rot" is definitely a thing but it can be cleared out by doing a clean reinstall of the OS. While this can be time consuming, you'd likely be doing it anyway if you got a new machine.


No idea where I'd even find an installer for Windows 7! It does make me wonder whether upgrading it would actually help. But for now it works well enough I'd rather not risk it (the other thing I use it for is some old software that requires a FAT partition for its licensing to work!).


I have a retail copy of Windows 7 on a DVD! But yeah, if you didn't buy it back in the day I'm not sure where you'd get it now.

Windows 10 (and I assume 11) has an option to "refresh" Windows in Settings.


Expecting your 8+ year old laptop to run as well as it did when it was new is completely unreasonable.

That has never been a reasonable expectation in the history of computing.


Why? Are the types of things I want that laptop to do different today than they were 8 years ago? Sure, apps and websites are heavier, but I'd posit the things most people do on their computers haven't changed in a decade at least.

> That has never been a reasonable expectation in the history of computing.

Yes, but again, why? As I see it, everyone has been conditioned to this lie that computers naturally slow down over time, because that's the way it has always been relative to the speed of current software. Originally, that was for a good reason—I'm glad programs now use full-color GUIs. But now?

What would actually happen if Moore's law ended tomorrow, and we were no longer able to make computers faster than they are today? I suspect that a (slim) majority of computer users would actually benefit. Not hardcore gamers, not scientists, and certainly not software developers--some people really do need as much performance as they can get. But for the people who just need to message friends, write documents, check email, etc., the experience would be unchanged—except that their current computers would never slow down!


I absolutely agree. It seems like most software developers only start optimizing code once our software starts feeling slow on our top-of-the-line development machines. As a result, every time we get faster computers we write slower code. When the M1 Macs and the new generation of AMD (and now Intel) chips came out 18 months or so ago, I spent big. I figured I had about 2 years of everything feeling fast before everyone else upgraded and all the software I use slowed down again.

Years ago while I was at a startup, I accidentally left my laptop at work on a Friday. I wanted to write some code over the weekend. Well, I had a Raspberry Pi kicking around, so I fired up Node.js on that and took our project for a spin. But the program took ages to start up. I hadn't noticed the ~200ms startup time on my "real" computer, but on a Pi that translated to over 1 second of startup time! So annoying! I ended up spending a whole morning profiling and debugging to figure out why it was so slow. Turns out we were pulling in some huge libraries and only using a fraction of the code inside. Trimming that down made the startup time ~5x faster. When I got into the office on Monday, I pulled in my changes and felt the speed immediately. But I never would have fixed that if I hadn't spent that weekend developing on the Raspberry Pi.
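
(If you want to try the same hunt without leaving your desk, Node's built-in profiler is usually enough to see where startup time goes - the entry file name here is just a placeholder for your own:)

    # writes a .cpuprofile into ./profiles that you can open in Chrome DevTools
    node --cpu-prof --cpu-prof-dir=./profiles index.js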

Since then I've been wondering whether there's a way to do this systematically. Have "slow CPU Tuesdays" or something, where everyone in the office turns off most of their CPU cores out of solidarity with our users. But I'm not holding my breath.
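
On Linux you can at least fake it by hand. Something like the following (core numbers are just an example; cpu0 usually can't be taken offline):

    # take cores 2-7 offline for the day, leaving only cpu0 and cpu1
    for n in $(seq 2 7); do echo 0 | sudo tee /sys/devices/system/cpu/cpu$n/online; done
    # bring them back when you're done suffering
    for n in $(seq 2 7); do echo 1 | sudo tee /sys/devices/system/cpu/cpu$n/online; done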


I've never expected my computer to run worse over time. There's no real mechanism for that to even happen; it works fine until it fails completely.

Recently it's become less possible to run the same software for 10+ years because so many things are subscription only and have unnecessary networking, which makes it necessary to patch security flaws, and then you have to accept whatever downgrade the vendor forces on you.

Older applications that you used to be able to just install run just as well as they did the day they came out on the hardware available at the time. The idea that computers "get worse" is entirely a phenomenon of the industry being full of incompetence. Even (or perhaps especially) programmers at FAANG companies are just not very good at their jobs.

Check out the argument Casey Muratori got into with the Windows Terminal maintainers about how slow the thing was. He got the standard claims about how "oh it's so complex and Unicode is difficult and he's underestimating how hard it is", so he wrote a renderer in a few hours that was orders of magnitude faster, used way less memory, and had better Unicode support.


There is (or at least was) some truth in computers getting worse over time.

File system fragmentation was a very significant problem when most people still used HDDs as their primary mass storage media. SSDs are far less affected by fragmentation because of their much faster random access times, but on HDDs performance suffered noticeably.

The Windows Registry is an arcane secret not even Microsoft fully comprehends at this point, and it can get very messy if a user installs and uninstalls lots of programs frequently. This is, of course, a problem with uninstallers not uninstalling cleanly and not a problem with Windows or the users. With so much crap moving to Chrome online-software-as-a-service outfits, users aren't (un)installing as many programs as frequently anymore, but an unkempt Windows installation can definitely slow down over time.

Software in general also just gets more and more bloated as the moons pass. More bloated software means less efficient use of hardware, meaning less performance and more user grief over time.


I have a netbook from around 2010. It has 2 GB of RAM and a single core Atom processor. It boots to a full Linux GUI desktop in a minute or so. It can handle CAD software, my editor, and my usual toolchain, if a bit slowly. It even handles HD video and the battery still holds a 6 hr charge.

But it doesn't really have enough RAM to run a modern web browser. A few tabs and we are swapping. That's unusably slow. A processor that's 5 or 20x slower is often tolerable. A working set that doesn't fit in RAM means thrashing, with a 1000x slowdown. And so this otherwise perfectly useful computer is garbage. Not enough RAM ends a machine's useful life before anything else does these days, in my experience.


Enable ZRAM. I run luakit with just 1GB of RAM and a compressed GB of ZRAM.

Atom n270 netbook, go figure.
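
Roughly what I do to set it up by hand, in case it helps (the 1 GB size and zstd choice are just examples - older kernels may only offer lzo, and many distros ship a zram package that does all this for you):

    sudo modprobe zram num_devices=1
    echo zstd | sudo tee /sys/block/zram0/comp_algorithm   # set compression before sizing
    echo 1G   | sudo tee /sys/block/zram0/disksize         # uncompressed capacity
    sudo mkswap /dev/zram0
    sudo swapon -p 100 /dev/zram0                          # prefer it over any disk swap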

Also, run this to get a system-wide adblocker:

    git clone https://github.com/stevenblack/hosts
    cd hosts
    sed -n '/Start Steven/,$p' < hosts > hosts.append
    # note: a plain `sudo cat hosts.append >> /etc/hosts` fails, because the
    # redirection runs in your non-root shell; tee does the append as root
    cat hosts.append | sudo tee -a /etc/hosts > /dev/null

EDIT: wrong URL


Same here, except I fine-tuned the kernel to boot under 10 s.

Of course it can't run all today's bloated software, but we're talking about the operating system, here, not the applications.
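
If anyone wants to chase the same thing on a systemd-based distro, it's worth measuring where the time actually goes before touching the kernel; these are the stock commands I'd start with:

    systemd-analyze                 # total firmware / loader / kernel / userspace time
    systemd-analyze blame           # which units eat most of it
    systemd-analyze critical-chain  # the dependency chain that gates boot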


That's fine for those desktop users who don't care about spinning fans, but many users are on laptops and care about battery life. An inefficiently coded app might keep the CPU running at high levels even when that's absolutely not required, because it's just a chat app or some such.


> For now developers can just slap a web app into some chromium based wrapper […]

making 10% of users unreachable in order to more easily reach the other 90%. yeah, it’s a fine business strategy. though i do wish devs would be more amenable to the 10% of users who end up doing “weird” things with their app as a result. a stupid number of chat companies will release some Electron app that’s literally unusable on old hardware, and then freak out when people write 3rd party clients for it because it’s the only real option you left them.


Prices per GB of RAM are starting to plateau, or at least not fall as quickly:

https://aiimpacts.org/trends-in-dram-price-per-gigabyte/

DRAM density and cost aren't improving like they used to.

Also memory efficiency is about more than just total DRAM usage; bus speeds haven't kept pace with CPU speeds for a long time now. The more of the program we keep close to the CPU -- in cache -- the happier we are.
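
If you want to see that effect on your own code, perf makes cache behaviour visible (./your_prog is a placeholder for whatever binary you care about):

    # compare cache hit rates across builds or access patterns
    perf stat -e cache-references,cache-misses,instructions,cycles ./your_prog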


You're part of the reason why we're stuck in the mess most people, actual idiots, don't even acknowledge as a mess.


True, but the products I build in this inefficient way solve other messes, so you win some, you lose some.



