Let’s pause for a bit and dwell on the absurd amount of RAM it takes to run it even after this exercise. Anyone here remember when QNX shipped a demo in 2000 with a kernel, GUI, web browser and an email client on a single 3.5” floppy? The memory footprint was also a few megabytes. I’m not saying we should be staying within some miserly arbitrary constraints, but my goodness something that draws UI and manages processes has not grown in complexity by four orders of magnitude in 20 years.
Hasn't it, though? HDR, fluid animations, monstrous resolutions, 3D everything, accessibility, fancy APIs for easier development allowing for more features, support for large amounts of devices, backwards compatibility, browsers are almost unrecognizable in featureset to the point they resemble an OS unto themselves, email clients have stayed mostly the same at least except for the part that they also ship a browser and few of us even use 'em anymore!
Some of those features combine exponentially in complexity and hardware requirements, and some optimizations will trade memory for speed.
Not going to defend particular implementations, but requirements? Those have definitely grown more than we give them credit.
That's the desktop compositor. Windows 7 already had one and ran on 1 GB of RAM.
> accessibility
Not everyone needs it, so it should be an optional installable component for those who do.
> fancy APIs for easier development allowing for more features
Those still use Win32 under the hood. Again, .NET has existed for a very long time. MFC has existed for even longer.
> support for large amounts of devices
No one asked for Windows on touchscreen anything. Microsoft decided that themselves and ruined the UX for the remaining 99% of the users that still use a mouse and a keyboard.
> backwards compatibility
That's what Microsoft does historically, nothing new here.
> browsers are almost unrecognizable in featureset to the point they resemble an OS unto themselves
No one asked for this. My personal opinion is that everything app-like about browsers needs to be undone, yesterday, and they should again become the hypertext document viewers they were meant to be. Even JS is too much, but I guess it does have to stay.
I think you have to reason this one out. Your statement, to me, doesn’t hold water.
Let’s start with HDR. That requires the content that’s being rendered to have higher bit depth. Not all of this is stored in GPU memory at once, a lot is stored in system RAM and shuffled in and out.
Now take fluid animations. The interpolation of positions isn't done solely on the GPU. It's coordinated by the CPU. I don't think this one necessarily adds RAM usage, but I think your comment is incorrect.
And lastly, with resolutions, the GPU is only responsible for the processing and output. You still need high resolution data going in. This is easily observed by viewing any low resolution image: it will be heavily blurred or pixelated on a high resolution screen. It stands to reason that the OS needs high enough resolution assets to accommodate high resolution screens. These aren't necessarily all stored on disk as high resolution graphics, but they have to be stored in memory as such.
——
As to the rest of your points, they basically boil down to: I don't want it, so I don't see why a default install should have it. Other people do want a highly featureful browser that can keep up with the modern web. And given that webviews are a huge part of application rendering today, the browser actively contributes to memory usage.
>> Let’s start with HDR. That requires the content that’s being rendered to have higher bit depth. Not all of this is stored in GPU memory at once, a lot is stored in system RAM and shuffled in and out.
HDR can still fit in 32-bit pixels. At 4K x 2K we have 8 megapixels, or a 32MB frame buffer. With triple buffering that's still under 100MB. Video games have been doing all sorts of animation for decades. It's not a lot of code, and a modern CPU can actually composite a desktop in software pretty well. We use the GPU for speed, but that doesn't have to mean more memory.
The difference between 2000 and 2023 is the quantity of data to move and, like I said, that's about 100MB.
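To put rough numbers on that (a quick back-of-the-envelope sketch in Python; the resolution, pixel size and buffer count are just the figures from the comment above):

    # Back-of-the-envelope framebuffer math for the figures above.
    width, height = 4096, 2048      # "4K x 2K"
    bytes_per_pixel = 4             # 32-bit pixels (a 10-10-10-2 HDR format fits here)
    buffers = 3                     # triple buffering

    one_buffer = width * height * bytes_per_pixel
    total = one_buffer * buffers

    print(f"pixels per frame: {width * height / 1e6:.1f} Mpx")   # ~8.4 Mpx
    print(f"one framebuffer:  {one_buffer / 2**20:.0f} MiB")     # ~32 MiB
    print(f"triple buffered:  {total / 2**20:.0f} MiB")          # ~96 MiB, i.e. under 100MB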
Unintuitively, your two questions are somewhat at odds with each other.
The more work you do on the GPU, the more you need to shuffle because the more GPU memory you’d use AND the more state you’d need to check back on the CPU side, causing sync stalls. It’s not insurmountable, and macOS puts a lot more of its work on the GPU for example. Windows is a little more conservative in that regard.
Here are some more confounding factors:
- Every app needs one or more buffers to draw into. Especially with HiDPI screens this can eat up memory quickly (rough numbers in the sketch after this list). The compositor can juggle these to try and get some efficiency, but it can't move all the state to the GPU due to latency.
- You also need to deal with swap memory. You'd ultimately need to shuffle data back to system RAM and then to disk and back, which is fairly slow. It's much better theoretically on APUs though.
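Rough numbers for the per-window buffer point above (the window size and count are made-up illustrative assumptions, not measurements):

    # Rough estimate of window backing-store memory at 2x HiDPI.
    logical_w, logical_h = 1600, 1000   # an assumed typical app window, in points
    scale = 2                           # 2x HiDPI backing scale
    bytes_per_pixel = 4                 # BGRA8
    windows = 20                        # assumed number of open windows/surfaces

    per_window = (logical_w * scale) * (logical_h * scale) * bytes_per_pixel
    total = per_window * windows

    print(f"per window: {per_window / 2**20:.0f} MiB")     # ~24 MiB
    print(f"{windows} windows: {total / 2**20:.0f} MiB")   # ~488 MiB before any GPU copies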
Theoretically, APUs stand to solve a lot of these issues because they blur the lines of GPU and CPU memory.
DirectStorage doesn't address the majority of these concerns though. It only means the CPU doesn't need to load data first to shuffle it over, but it doesn't help if the CPU does need to access said data or schedule it.
It’s largely applicable mainly to games where resource access is known ahead of time.
Only if you're dealing with just the desktop environment and don't allow the user to load applications. Or if those apps also didn't allow dynamism of any kind, like loading images from a website.
> > browsers are almost unrecognizable in featureset to the point they resemble an OS unto themselves
> No one asked for this. My personal opinion is that everything app-like about browsers needs to be undone, yesterday, and they should again become the hypertext document viewers they were meant to be. Even JS is too much, but I guess it does have to stay.
People did ask for this, because it made them a lot of money.
You should recognize your opinion is a minority one outside of tech (and possibly, there too).
To wit, virtually no one is jumping to Gopher or Gemini.
What people want is a way to run amazon.com (and gmail and slack and so on), on any of their devices, securely, and without the fuss of installing anything.
Ideally the first-time use of amazon.com should involve nothing more than typing "amazon" and hitting enter. It should show content almost instantly.
Satisfying that user need doesn't require a web browser. If OS vendors provided a way to do that today, we'd be using it. But they don't.
OS vendors still don't understand that. They assume people forever want to install software via a package manager. They assume software developers care about their platform's special features enough to bother learning Kotlin / Swift / GTK / C# / whatever. And they assume all software a user runs should be trusted with all of their local files.
Why is docker popular? Because it lets you type the name of some software. The software is downloaded from the internet. The software runs on linux/mac/windows. And it runs in a sandbox. Just like the web.
The web - for all its flaws - is still the only platform which delivers that experience to end users.
I'd throw out javascript and the DOM and all that rubbish in a heartbeat if we had any better option.
> What people want is a way to run amazon.com (and gmail and slack and so on)
Guess what, both GMail and Slack have video calls. They use WebRTC. The browser has to support it. So the WebRTC code is a part of it.
> Ideally the first-time use of amazon.com should involve nothing more than typing "amazon" and hitting enter. It should show content almost instantly.
And it does. Open an incognito tab, type amazon.com, it's pretty crazy how fast it loads, with all the images.
You're just proposing to move all the complexity of the browser into some other VM that would have to be shipped by default by all OS platforms before it could become useful.
Java tried exactly this, and it never took off in the desktop OS world. It wasn't significantly slimmer than browsers either, so it wouldn't have addressed any of your concerns.
Also, hyperlinking deep into and out of apps is still something that would be very very hard to achieve if the apps weren't web native - especially given the need to share data along with the links, but in a way that doesn't break security. I would predict that if you tried to recreate a platform with similar capabilities, you would end up reinventing 90% of web tech (though hopefully with a saner GUI model than the awfulness of HTML+CSS+JS).
> You're just proposing to move all the complexity of the browser into some other VM that would have to be shipped by default by all OS platforms before it could become useful.
I'm not proposing that. I didn't propose any solution to this in my comment. For what it's worth, I agree with you - another Java Swing style approach would be a terrible idea. And I have an irrational hate for docker.
If I were in solution mode, what I think we need is all the browser features to be added to desktop operating systems. And those features being:
- Cross platform apps of some kind
- The app should be able to run "directly" from the internet in a lightweight way like web pages do. I shouldn't need to install apps to run them.
- Fierce browser tab style sandboxing.
If the goal was to compete with the browser, apps would need to use mostly platform-native controls like browsers do. WASM would be my tool of choice at this point, since then people can make apps in any language.
Unfortunately, executing this well would probably cost 7-10 figures. And it'd probably need buy in from Apple, Google, Microsoft and maybe GTK and KDE people. (Since we'd want linux, macos, ios, android and windows versions of the UI libraries). Ideally this would all get embedded in the respective operating systems so users don't have to install anything special, otherwise the core appeal would be gone.
Who knows if it'll ever happen, or if we'll just be stuck with the web forever. But a man can dream.
My thinking is that, ultimately, if you want to run the same code on Windows, MacOS, and a few popular Linux distros, and to do so on x86 and ARM, you need some kind of VM that translates an intermediate code to the machine code, and that implements a whole ton of system APIs for each platform. Especially if you want access to a GUI, networking, location, 3D graphics, Bluetooth, sound etc. - all of which have virtually no standardization between these platforms.
You'll then have to convince Microsoft, Apple, Google, IBM RedHat, Canonical, the Debian project, and a few others, to actually package this VM with their OSs, so that users don't have to manually choose to install it.
Then, you need to come up with some system of integrating this with, at a minimum, password managers, SAML and OAuth2, or you'll have something far less usable and secure than an equivalent web app. You'll probably have to integrate it with many more web technologies in fact, as people will eventually want to be able to show some web pages or web-formatted emails inside their apps.
So, my prediction is that any such effort will end-up reimplementing the browser, with little to no advantages when all is said and done.
Personally, I hate developing any web-like app. The GUI stack in particular is atrocious, with virtually no usable built-in controls, leading to a proliferation of toolkits and frameworks that do half the job and can't talk to each other. I'm hopeful that WASM will eventually allow more mature GUI frameworks to be used in web apps in a cross-platform manner, and we can forget about using a document markup language for designing application UIs. But otherwise, I think the web model is here to stay, and has in fact proven to be the most successful app ecosystem ever tried, by far (especially when counting the numerous iOS and Android apps that are entirely web views).
> You'll then have to convince Microsoft, Apple, Google, IBM RedHat, Canonical, the Debian project, and a few others, to actually package this VM with their OSs, so that users don't have to manually choose to install it.
I think this is the easy part. Everyone is already on board with webassembly. The hard part would be coming up with a common api which paves over all the platform idiosyncrasies in a way that feels good and native everywhere, and that developers actually want to use.
> what I think we need is all the browser features to be added to desktop operating systems.
I trust you are aware Microsoft did exactly that, and the entire tech world exploded in anger, and the US Government took Microsoft to court to make them undo it on the grounds that integrating browser technology into the OS was a monopolistic activity[0].
While I agree with you, I don't think people really wanted this. I mean, life wasn't miserable when web apps didn't exist.
We could have lived in an alternative universe where we succeeded in teaching people the basics of how to use the computer as a powerful tool for themselves.
Instead, corporations rushed to make most things super easy and made billions along the way.
I’d even say that this wasn’t really a problem until they realized that closed computers allowed them more control and more money.
So yeah, now we are stuck with web apps on closed systems and most people are happy with it, that’s true.
And, as time passes, we are losing universal access to "the computer". Instead of a great tool that gives power to the people, it's being transformed into a prison to control what people can do, see and even think.
PS: When I say "computer" I include PCs, phones, tablets, voice assistants… everything with a processor running arbitrary programs.
I disagree.
When I want to deliver a piece of software to my parents, I first think about a web solution (to me, they stand in for >80% of PC users).
I just uninstalled a browser toolbar from my stepfather's PC last weekend.
There are simply too many bad actors out there.
The browser sandbox works pretty well against them.
My parents have become very hesitant to install anything, even iOS updates, because they don't like change and fear that they might do something wrong.
I agree that JS is not a gold standard. Still, it works most of the time, and with TypeScript stapled on top it is acceptable.
Time has proven again and again (not only in tech) that the simple solutions will prevail.
Want to change it? Build a simpler and better solution.
I don't like that either, but that's human nature at work.
I'm so sick of people shutting down valid opinions because they have a "minority opinion" about tech. That tech slobbers so messily over the majority -- and, seemingly, ONLY the majority -- is a massive disservice to all of the nerds and power users that put these people where they are today.
Maybe, instead of shutting those opinions down, you should reflect on how you, in whatever capacity you serve our awful tech overlords, can work to make these voices more heard and included in software/feature design
I hear you, but OP said 'no one asked for this' but people did ask for this. The whole argument was about popularity of the idea to add features to browsers.
I'd also like to add that accessibility is not a binary that's either on or off. The parent comment might be thinking of features for people with high disability ratings, but eventually everyone has some level of disability. Some even start off life with one: color blindness, vision impairment. Most people have progressive near vision loss (presbyopia) as they age.
Also, disability may not be permanent. I recently underwent major surgery and for at least a few days afterwards using my cell phone was nearly impossible. I resorted to voice control a few times because I did not have the coordination or cognitive function to type. (Aside: cell phones in general are accessibility dumpster fires, but it took a major life event to demonstrate to me how bad it really is.)
So no, accessibility is not just a toggle switch or installable library. In fact, I hope future UI design incorporates some kind of non-intrusive learning and adaptability, such that when the system detects the user continually making certain kinds of errors, the UI will adapt to help.
Of course. Navigating around the install process without accessibility already enabled is going to be a non-starter for many.
As for why all the bloat? I speculate it's because accessibility features are a second-class citizen at best, and when it comes to optimizing and streamlining, all the effort in development goes into the most-used features, whether or not they are the most essential.
I'm suggesting that modern accessibility support doesn't need more memory than the entirety of Windows 95. So 4MB extra, or let's say 10x that to be generous.
Yes. At least in Windows 10 it's a disaster. Without high contrast, which looks terrible, it draws gray colors on a light background, making it difficult to read.
Accessibility is much more than just labels for a screen reader. Please stop trivializing anything that you don't use directly; it's a common thread between all your comments, and it's a disservice to both the points you're trying to make and the people who actually use those things.
Accessibility includes interaction design, zoom ability, audio commands, action link ups, alternate rendering modes, alternate motion modes, hooks for assistive devices to interact with the system. It goes far deeper into the system than just labels for a screen reader.
If you stopped to just think about the vast number of disabilities out there, you’d realize how untrue your statement is.
All that extra crap doesn't make any sense, when the earliest versions of Windows up to ~7 had controls to let you adjust the UI to exactly how you'd like it, which is of course very important for accessibility.
Then starting with Windows 8, they removed a lot of those features. 11 is even worse.
My point is that accessibility being a thing shouldn't ruin the UI for the people who don't need it. There's no need to visually redesign anything to introduce accessibility. Apps don't need to be made aware whether some control has focus because the user has pressed the tab key, or because it's being focused by a screen reader, or because of some other assistive technology. Colors and font sizes can also be configured and they've been configurable since at least Windows 3.1 — and that is exposed to apps.
Again, I don't see how the things you specified can't be built into existing win32 APIs and why anything needs to be designed from the ground up to support them.
Your point about “apps don’t need to be made aware” is precisely the reason accessibility is part of the system UI framework.
Accessibility is also not something that is just a binary. You may be slightly short sighted and need larger text, you might need an OS specified colour palette that overrides the apps rendering. There’s just so many levels of nuance here. It’s not just “apps can configure a palette”, it’s that they need to work across the system
If you have the time, I really suggest watching the Apple developer videos on accessibility to see why it's not just as simple as you put it. Microsoft do a lot of great work for accessibility too, they just don't have much content up that delves into it.
As to why it has to be developed from the ground up, it doesn’t, but it needs to be at the foundation regardless. Apple for example didn’t redo their UI for accessibility, however Microsoft take a more “we won’t touch existing stuff in case we break it” approach to their core libs.
Also, again, I'd point out that you're purposefully trying to trivialize something you don't use.
> It’s not just “apps can configure a palette”, it’s that they need to work across the system
There is a system-provided color palette. I don't know where this UI is in modern Windows, but in versions where you could enable the "classic" theme, you could still configure these colors. They are, of course, exposed to apps, and apps are expected to use them to draw their controls. That, as well as theme elements since XP.
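For illustration, a minimal sketch (Windows-only, via Python's ctypes) of how an app reads those system colors instead of hardcoding its own; the three indices are the classic COLOR_* constants from winuser.h:

    # Reading the system-provided color palette through the classic Win32 API.
    import ctypes

    COLOR_WINDOW = 5       # default window background
    COLOR_WINDOWTEXT = 8   # default text color
    COLOR_BTNFACE = 15     # button / dialog face color

    user32 = ctypes.windll.user32

    def sys_color(index):
        # GetSysColor returns a COLORREF packed as 0x00BBGGRR.
        colorref = user32.GetSysColor(index)
        return (colorref & 0xFF, (colorref >> 8) & 0xFF, (colorref >> 16) & 0xFF)

    for name, idx in [("window", COLOR_WINDOW),
                      ("window text", COLOR_WINDOWTEXT),
                      ("button face", COLOR_BTNFACE)]:
        print(name, sys_color(idx))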
> Microsoft take a more “we won’t touch existing stuff in case we break it” approach to their core libs.
Making sure you don't break existing functionality is called regression testing. I'm sure Microsoft already does a lot of it for each release.
And actually it's not quite that. The transition from 9x to NT involved swapping an entire kernel from underneath apps. Most apps didn't notice it. In fact, the backwards compatibility is maintained so well that I can run apps from the 90s — built for, and only tested on, the old DOS-based Windows versions — on my modern ARM Mac, in a VM, through an x86 -> ARM translation layer.
> Accessibility includes interaction design, zoom ability, audio commands, action link ups, alternate rendering modes, alternate motion modes, hooks for assistive devices to interact with the system. It goes far deeper into the system than just labels for a screen reader.
I wonder where the current status quo lies in regards to both desktop computing and web applications/sites. Which OSes and which GUI frameworks for those are the best or worst, how do they compare? How have they evolved over time? Which web frameworks/libraries give one the best starting point to iterate upon, say, component libraries and how well they integrate with something like React/Angular/Vue?
Sadly I'm not knowledgeable enough at the moment to answer all of those in detail myself, but there are at least some tools for web development.
For example, this seems to have helpful output: https://accessibilitytest.org
There was also this one, albeit a bit more limited: https://www.accessibilitychecker.org
I also found this, but it seemed straight up broken because it couldn't reach my site: https://wave.webaim.org/
From what I can tell, there are many tools like this: https://www.w3.org/WAI/ER/tools/
And yet, while we talk about accessibility occasionally, we don't talk about how good of a starting point certain component frameworks (e.g. Bootstrap vs PrimeFaces/PrimeNG/PrimeVue, Ant Design, ...) provide us with, or how easy it is to setup build toolchains for automated testing and reporting of warnings.
As for OS related things, I guess seeing how well Qt, GTK and other solutions support the OS functionality, and what that functionality even is, is probably a whole topic in and of itself.
Accessibility checkers can be helpful, particularly for catching basic errors before they ship. The large majority of accessibility problems a site can have cannot be identified by software; humans need to find them.
Current Bootstrap is not bad if you read and follow all of their advice. I'm not claiming there are no problems lurking amongst their offerings.
If you search for "name-of-thing accessibility" and don't find extensive details about accessibility in the thing's own documentation, it probably does a poor job. A framework can't prevent developers from making mistakes.
"The large majority of accessibility problems a site can have cannot be identified by software"
Bold statement. I used to work in exactly that area and the reality is humans often simply don't bother finding many of the accessibility issues that automated tools can and do find. Even if such a tool isn't able to accurately pinpoint every possible issue, and inevitably gives a number of false positives (the classic being expecting everything to have ALT text, even when images are essentially decorative and don't provide information to the user), the use of it at least provides a starting point for humans to be able to realistically find the most serious issues and ensure they're addressed.
However I would never claim that good accessibility support requires significantly more (e.g. >2x) resources, and certainly not at the OS level.
In fact, you typically get better accessibility if you use the built-in OS (or browser) provided controls, which are less resource intensive than the fancy custom ones apps seem to like using these days (even MS's own apps are heavy on custom controls for everything).
I currently work in this area (web accessibility) and am just repeating what is commonly understood. When considering what WCAG criteria cover (which is not even everything that could pose a barrier to people with disabilities), most failures to meet the criteria cannot be identified by software alone.
For example, the classic I would say is not whether an image needs an alt attribute or not but whether an image's alt attribute value is a meaningful equivalent to the image in the context where it appears.
I'm not sure what kind of "resources" you're referring to. If you mean computing resources (CPU, RAM, etc.) standard, contemporary computers do seem to have enough for current assistive technologies, one doesn't need to buy a higher end computer to run them. If you mean OS resources for supplying assistive technologies and accessibility APIs, mainstream OS's are decent but specifically for screen readers there's a lot of room for improvement.
> Which OSes and which GUI frameworks for those are the best or worst, how do they compare?
Hands down macOS/iOS are the leaders here with Cocoa/SwiftUI/UIKit etc (ultimately basically the same). The OS also has many hooks to allow third party frameworks to tie in to the accessibility.
Windows is second in my opinion. Microsoft does some good work here but it’s not as extensive in terms of integrations and pervasiveness due to how varied their ecosystem is now. They do however do excellent work on the gaming side with their accessibility controllers.
In terms of UI frameworks, Qt is decent but not great. Electron actually does well here because it can piggyback off the work done for web browsers. Stuff like ImGui etc. ranks at the bottom because they don't expose the accessibility tree to the OS in a meaningful way.
I can’t speak to web frameworks. In theory it shouldn’t matter as long as the components are good. Many node frameworks try and install a11y as a package to encourage better accessibility.
I switched from windows to macOS, which I’ve been using as my daily driver for the last year or so. Using the touchpad (or maybe the Magic Mouse) is basically a requirement to use “vanilla” macOS. Yes, you can install additional programs to help with window management, etc., but in my experience macOS is absolutely horrible when it comes to accessibility, from this standpoint. Maybe it’s better for colors, TTS, etc.?
I'm not sure what walls you might have been hitting, but macOS is completely usable with speech direction. I had to quite recently add better accessibility support to an app I worked on, and I was basically navigating the entire system with voice control and keyboard hotkeys.
Voice control in particular is really handy with the number and grid overlays for providing commands.
I’ll check it out. But this seems to approach accessibility as a feature to be turned on or off. Most of what it enables, based on Apple docs, is not just enabled in Windows and many Linux window managers I’ve used, but it’s something that developers actively utilize.
That's not where macOS came from. For Windows and Linux, "in the beginning was the command line" but not for Macs.
There's plenty one can do in macOS and its native applications with a keyboard by default; those that need more can enable "Use keyboard navigation to move focus between controls." Those that need even more enable Full Keyboard Access. These settings aren't on by default because Apple has decided they'd just get in the way and/or confuse people who use the keyboard but rely on it less.
In Safari specifically, by default pressing Tab doesn't focus links as it does in every other browser, because most people use a cursor to activate links, not the keyboard. There also tend to be a lot more links than the elements Tab does focus: form inputs.
Macs try to have just enough accessibility features enabled by default that anyone who needs more can get to the setting to turn it on. Something I just learned Macs have that other OS/hardware doesn't is audible feedback for the blind to login when a Mac is turned on while full disk encryption is enabled.
I'm not claiming Apple gets everything right or that their approach is the best, I'm just trying to describe the basics of what's there and the outlook driving the choices.
I want touchscreen support on Windows. But guess what? Multitouch worked in Windows 7. If Windows still supported theming basic controls then Microsoft could enable touch screen support in most applications by setting a theme, similar to how they enhance contrast if you enable that feature.
I understand that bigger stuff and better graphics involve more RAM and the switch to 64 bit doubled the pointer sizes (which is why you can't meaningfully run Windows 7 x64 on 1GB of RAM like you can the 32 bit version) but with 4GB of system RAM you should be able to fit everything in and then some.
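To illustrate just the pointer-size part (the structure below is hypothetical; real 64-bit overhead also comes from alignment and wider types, not only pointers):

    # Pointer size on the running interpreter, plus a toy overhead estimate.
    import struct

    pointer_bytes = struct.calcsize("P")   # 4 on 32-bit builds, 8 on 64-bit
    print(f"pointer size here: {pointer_bytes} bytes")

    # A hypothetical pointer-heavy structure holding 10 million pointers:
    pointers = 10_000_000
    print(f"32-bit: {pointers * 4 / 2**20:.0f} MiB of pointers")   # ~38 MiB
    print(f"64-bit: {pointers * 8 / 2**20:.0f} MiB of pointers")   # ~76 MiB
    # The doubling only hits pointer-heavy data, which is why 64-bit needs
    # noticeably more RAM, but nowhere near 2x overall.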
You actually can, as various Linux distributions demonstrate. The algorithms and APIs aren't as well developed, but better window control/accessibility APIs don't take up more than a megabyte of RAM.
People do ask for many Microsoft features, such as the appification of the interface and the Microsoft store. Just because you didn't ask for it, doesn't mean it's not necessary. However, Microsoft has known for years how to build and implement those requests in a much more compact environment.
My take is still the same old cynical one: as resources become cheaper, developers become lazier. I don't want to go back to the days of racing the beam with carefully planned instructions, but the moment Electron gained any popularity the ecosystem went too far. "Yes, but our customers want features more than a small footprint" is the common excuse I hear, but that's ignoring all the people calling various support channels or just being miserable with their terribly slow machine.
> as resources become cheaper, developers become lazier.
At most places I've worked it's a struggle to get time allocated towards necessary refactoring that'll ensure new features can be delivered in a timely fashion.
I'd love to spend time making the product more efficient but unless I can demonstrate immediate and tangible business value in doing so, it's never going to be approved over working on new features.
>No one asked for Windows on touchscreen anything. Microsoft decided that themselves and ruined the UX for the remaining 99% of the users that still use a mouse and a keyboard.
I have several devices, including a couple of Linux PCs, an M1 MacBook Air, and a Microsoft Surface Go. If Windows 11 didn't support touchscreens, I would have gone with an iPad. However, Windows 11 is the _best_ touchscreen OS to date.
Unlike iOS or iPadOS, Windows 11 runs desktop apps and combines the convenience of touchscreen scrolling/interaction with the desktop experience. Windows 11 does this very, very well.
I'm curious if you've used Chrome OS recently, there's a lot of good work there too. Touch is there if you need it with the keyboard open, then goes into tablet mode if the laptop is convertible or detachable. The touch/tablet UI has lost many rough edges in the last 2-3 years, and it hasn't affected the mouse/keyboard mode most people use Chromebooks for.
I don't use Windows anymore but I remember thinking "this is exactly what I've always wanted from a convertible/touch-support-in-desktop OS"...
I think I first saw it running on a GeForce with 64MB of RAM. Even then it was smooth as butter.
Now that I think about it, Mac OS X was doing GPU compositing back in 2000/2001, and those machines usually only had about 16MB of VRAM. I remember it running fairly well on a 2005 Mac mini G4 with 32MB of VRAM.
The first versions of Mac OS X only supported software rendering. GPU compositing didn't show up until 2002, in Mac OS X 10.2. It was branded as "Quartz Extreme".
I did not know that! There was about a 6-7 year gap between 1997 and 2004 where I didn't really do much with Macs. But your timeline seems spot on; it was 10.3 when they introduced Exposé into the system. A great demonstration of the GPU functionality in action.
Actually, IIRC the only requirement for DWM to work was a GPU that supports shaders, because that's what makes the window border translucency/blur effect possible.
Compatible driver, actually. There were at least DWM 1.0 (Vista) and DWM 1.2 (Win7), but Intel never provided a compatible driver for the... 915? series, so you couldn't enable composition on them, even though the hardware was capable enough.
Prodigy had vector-based graphics in a terminal back in the 1980s. Granted, that targeted EGA and 2400 baud modems, but I wonder how well it would work on modern hardware if you just gave it a 4K, 24-bit frame buffer and fixed up the inevitable integer overflows.
Actually, I've run Citrix (ancestor of Remote Desktop) on a 14.4k modem. Once all the bitmaps are downloaded and cached (those app launch 1/2 screen splash pages were murder), it ran pretty well. The meta graphic operations (lines, circles, fills, etc.), fonts, etc. worked fine. Any large pixmap operations were crushing, but most productivity apps didn't use those as much as you'd think.
You didn't ask. It is, as you say, your personal opinion.
From my POV, current Web is fine and the fact that browsers are powerful liberated us from writing specialized desktop apps for various OSes. I am much happier writing a Web UI than hacking together Win32 or Qt-based apps. Or, God forbid, AVKON Symbian OS UI. That was its own circle of hell.
> liberated us from writing specialized desktop apps for various OSes
I use macOS and I very much dislike anything built with cross-platform GUI toolkits, and especially the web stack. And it's always painfully obvious when something is not native. It doesn't behave like the rest of the system. It's not mac-like. It draws its own buttons from scratch and does its own event handling on them instead of using NSButton. I don't want that kind of "liberation". I want proper, native, consistent apps. Most other people probably do too, they just don't realize that or can't put it into words.
The only counter-example out there known to me is IntelliJ-based IDEs. They're built with Swing, but they do somehow feel native enough.
Also, developer experience is not something users care about. And I'm saying that as a developer myself. Do use fancy tools to make your job easier, sure, but avoid those of them that stay inside your product when you ship it.
I don’t like the direction GUIs have gone either, and think the JavaScript-ization of everything has been pretty dumb. But it seems that bloat is doing well in the market.
Users might not care about developer experience, but everything is a trade off: developer time is a cost, the cost of producing software is an input into how much it needs to cost. Users seem to want features delivered quickly, without much regard to implementation quality.
Users just don't have much say in the matter. Case in point: Discord and Slack are atrocious UX-wise. You're still forced to use them because, as with any walled-garden communication service, you aren't the one making this choice.
Hold up. It's been ~14 years since Apple shipped machines with 2GB of memory as their base model.
macOS (and iOS) have incredibly good screen reader support, as well as all of the things you're complaining about in your original comment at the top of this thread. Clearly those things are absolutely gobbling memory, and yet you don't seem to connect the dots that they're directly contributing to high memory requirements of macOS?
I mean, 8GB on stock machines today is barely manageable. You can't buy a Mac with less than 8GB today; you can't even buy a phone with 2GB or less. I'm not sure you're in a position to rail against high-memory bloat in computing today.
p.s. I say this as someone who uses macOS as their daily driver and has for a very long time
> I'm not sure you're in a position to rail against high-memory bloat in computing today.
Nobody is a hypocrite for buying X gigabytes of ram but also wanting the naked operating system to use a much smaller amount, or wanting single programs to use a much smaller amount.
> macOS (and iOS) have incredibly good screen reader support, as well as all of the things you're complaining about in your original comment at the top of this thread. Clearly those things are absolutely gobbling memory, and yet you don't seem to connect the dots that they're directly contributing to high memory requirements of macOS?
What makes a screen reader gobble memory?
And it definitely shouldn't gobble memory when it's not running.
Mainly the TTS engine being ready for input, stuff like that. Of course you could go to Linux, where you have to enable assistive technologies support before the whole desktop understands that it should work with screen readers. I'm guessing that's where accessibility does take up RAM and resources.
Screen reader support by itself doesn't gobble memory. Android has had it for ages, and still runs on devices with less than 1 GB RAM (Android Wear watches).
Running several instances of Chromium though... You'll probably run one anyway at all times as your actual web browser, but additional ones in the form of "oh so easy to build" Electron apps don't help. In Apple's eyes, though, you should absolutely ignore other browsers and use Safari exclusively. It might not be as much of a memory hog as Chrome — I haven't researched this; it's simply my guess.
I also heard that M1 Macs are better at memory management compared to Intel. Again, I don't have any concrete evidence to back this up, but knowing Apple, it's believable.
It liberated you as a developer. As a developer, I can understand. As a user, I hate you. You never provide me, as a user, with a native experience via a web UI. You use custom controls, which break the conventions of native controls a little bit here and there. You cannot use the full power of the OS (the YouTube or Spotify player doesn't pause itself when the workstation is locked; my native player of choice does). You eat my resources. You cannot make your application consistent with applications from other vendors, so I need to remember different patterns for different apps. Your typical browser app doesn't have ANY features for power users, like shortcuts for all commands and useful keyboard controls (not to mention full customization of these controls, toolbars, etc). Damn you and your laziness!
But I understand that most of my complaints are the complaints of a power user with 25+ years of experience and muscle memory, and I'm not the target audience for almost any new app. You win :-(
Everything is a trade-off. If, as a developer, you have to spend ungodly hours on learning multiple UIs, you will have less time left for the actual business logic of your app. Which, from the user's side, means one of the following three:
a) nice looking, but less capable apps,
b) more expensive apps, or apps that have to be paid for even if they could be free in an alternate universe,
c) limited availability - app X only exists for Windows and not Mac, because either a Mac programmer isn't available or would be too expensive.
Developing for multiple UIs at once is both prone to errors and more expensive, you wind up paying for extra developers, extra testers/QA, extra hardware and possibly extra IDEs and various fees. Such extra cost may be negligible for Google, but is absolutely a factor for small software houses outside the richest countries, much more so for "one person shows" and various underfunded OSS projects.
I remember the hell that was Nokia Series 60 and 90 programming. Nokia churned out a deluge of devices that theoretically shared the same OS, but they had so many device-specific quirks and oddities on the UI level that you spent most of the time fighting with (bad) emulators of devices you could not afford to buy. This is the other extreme and I am happy that it seems to be gone forever.
If your application can be useful on different OSes (and now there are only 3 desktop OSes in existence, as porting a desktop application to mobile requires completely different UI and UX no matter what technology you use!), break it into business logic and UI and find a partner or hire a developer who loves to develop native UIs for the other OSes. The MVC pattern is old and well known (though not fashionable now, I understand).
OSS projects are a completely different story, of course; no questions to OSS developers.
I prefer to pay $200 for a native application rather than $100 for an Electron one.
Oh, who am I trying to fool? Of course, it will be an Electron app with a $9.95/month subscription now :-(
"break it into business logic and UI and find partner or hire developer who love to develop native UIs for other OS"
As I said in my previous comment, this is quite expensive, and people inside Silicon Valley rarely understand how cash-strapped the software sector in the rest of the world is. In Czech, we have a saying: "a person who is fed won't believe a hungry one". SV veterans used to reams of VC cash supporting even loss-making businesses like Uber have no idea that the excess spending needed to hire another developer for several months somewhere in Warsaw or Bucharest may kill a fledgling or small company.
An optional installable component until you have a blind person doing tech support and they have to walk the tech-illiterate person through installing the accessibility stack, lol. Or until you suddenly go blind from a condition or accident and have to mouse your way through the interface, blind, to install that component. Ugh, ableism.
As someone from back in those days, I'll tell you: load up that software that fits in some small amount of memory and you'll find most of it is crash-filled hot garbage missing the features you need. And the moment you wanted to add new features you'd start importing libraries, bloating the size of the application.
In general I would say: far more stable and far more features.
But this of course is in the metrics of how you measure. Windows 3.1 for example was a huge crashing piece of crap that was locking up all the damned time. MacOS at the time wasn't that much better. Now I can leave windows up for a month at a time between security reboots. Specialized Windows and Linux machines in server environments on a reduced patching schedule will stay up far longer, but generally security updates are what limits the uptime.
I remember running Windows applications and receiving buffer overflow errors back then. If you got a buffer overflow message today you'd think that either your hardware is going bad or someone wrote a terrible security flaw into your application. And back then there were security flaws everywhere. 'Smashing the Stack for Fun and Profit' wasn't written until '96, well after consumers had started getting on the internet en masse. And if you were using applications like Word or Excel you could expect to measure crashes per week rather than the crashes per month we see today, many of which are completely recoverable in applications like Office.
I've been on Win11 for 1.5 years or so (Win11 Insider Beta channel) and before that was on the Win10 Beta/Dev channels. From what I remember so far, I was warned multiple times and asked to pick a time, and only after the user (me) showed no cooperation was the system forcibly rebooted, which for a consumer-grade edition (I have the Pro version) is fine, from my PoV. I don't want [my] system and the systems around me to be part of botnets like Linux boxes of all sorts.
> For many applications Windows 10 saves state and comes back right where you started on a security update reboot.
This needs application support; by this broad definition, all operating systems "save state and come back right where you started" on a security update reboot.
Resolutions and HDR are one area where I think the extra RAM load and increasing application sizes make complete sense. However, my monitors run at 1080p, don't do HDR, and my video files are encoded at a standard colour depth. Despite all this, standalone RAM usage has increased over the years.
Accessibility has actually gone down with the switch to web applications. Microsoft had an excellent accessibility framework with subpar but usable tooling built in, and excellent commercial applications to make use of the existing API, all the way back in Windows XP. Backwards compatibility hacks such as loading old memory manager behaviour and allocating extra buffer space for known buggy applications may take more RAM but don't increase any requirements.
I agree that requirements have grown, but not by the amount reflected in standby CPU and memory use. Don't forget that we've also gained near-universal SSD availability, negating the need for RAM caches in many circumstances. And that's just ignoring the advance in CPU and GPU performance since the Windows XP days, when DOS was finally killed off and the amount of necessary custom-tailored assembly drastically dropped.
When I boot a Windows XP machine, the only thing I can say I'm really missing as a user is application support. Alright, the Windows XP kernel was incredibly insecure, so let's upgrade to Windows 7 where the painful Vista driver days are behind us and the kernel has been reshaped to put a huge amount of vulnerable code in userspace. What am I missing now? Touchscreen and pen support works, 4k resolutions and higher are supported perfectly fine, almost all modern games still run.
The Steam hardware survey says it all. The largest target audience using their computer components the most runs one or two 1080p monitors, has 6 CPU cores and about 8GB of RAM. Your average consumer doesn't need or use all of that. HiDPI and HDR are a niche and designing your OS around a niche is stupid.
True, but with those access times you can wait a lot longer for content to be loaded into RAM. Hard drives are the reason that, for many years, games needed to duplicate their assets, for example, because seek times slowed down loading, and putting the same content in the file twice, at the right place, would speed up the loading process significantly. Games today still have special HDD code because of the difference in performance class.
SSDs won't replace RAM but many RAM caches aren't performance critical; sometimes you need your code to be reasonably fast on a laptop with a 5400 rpm hard drive and then you have very little choice of data structures. With the random access patterns SSDs allow this complication quickly disappears. You won't find many Android apps that will cache 8MB block reads to compensate for a spinning hard drive, for example.
I ran e16 and then e17 as my main desktop back in the day for a good while. I'm sorry but what we had back then was nowhere even near what I'm talking about.
What do we have today that we didn't have back then in terms of bare desktop support?
I mean, we have larger resolution support and scaling for HiDPI, better/faster indexing, better touchpad support. Can you name anything else? Localization hasn't progressed that much; I remember already being able to select some barely spoken dialects on Linux 20 years ago.
NeXTSTEP 3.1 ran fine at 1152x832 4 shade mono with 20MB of RAM. 32MB if you were running color.
It was also rendering Display PostScript on a 25MHz '040. One of the first machines in its day that allowed you to drag full windows, rather than frames, on the desktop. High tech in action!
You could also do that in '92-ish on RISC OS 3 running on a 1MB Acorn Archimedes with 12MHz ARM2 processor, with high quality font antialiasing. Those were the days!
> Hasn't it, though? HDR, fluid animations, monstrous resolutions, 3D everything, accessibility, fancy APIs for easier development allowing for more features, support for large amounts of devices, backwards compatibility,
So, the features Windows 7 had? I remember running a 3D desktop with a compositor and fancy effects on a laptop with 1GB of RAM on Linux...
Please don't miss the malware within the OS itself: license services for software such as Microsoft Office and Adobe, and other applications without enough resource bounds.
It is still possible to have a snappy computer experience. Go Linux, use a very configurable distro (Arch, Gentoo, NixOS), choose a lightweight DE and app ecosystem and it will get you there for the most part.
Browsers are still going to be the sticking point, but with aggressive ad blockers/NoScript and hardware that's not terribly old (NVMe storage is priority 1), you should be set.
But of course, snappiness isn't free and you have to spend some time doing first time set-ups and maintenance.
I've got 16 GB of RAM and the browser is using most of it. I can literally see the swap space emptying when I have to (as in "I'm forced to") sacrifice my browsing session (xkill the browser) due to constant swapping out to disk.
And I’m using a pci gen 3 nvme disk, and already lowered swappiness.
At this point, my primary use case for ad blocking isn't the ad blocking itself; it is 1. the security of blocking ads, one of the worst attack vectors in the wild, and 2. the greatly reduced system resources my browser uses. The ad blocking itself is a further bonus.
I'd suggest again to try NoScript/Adblocking, disable hardware accel if you have it enabled, enable it if disabled.
If even then you have no success, I'd suggest you try something like EndeavourOS. Browsers have issues, but that is not normal. You're not using Debian stable on the desktop, right?
> Let’s pause for a bit and dwell on the absurd amount of RAM it takes to run it even after this exercise.
I agree and I find the apologists to be completely wrong. I run a modern system: 38" screen, 2 Gbit/s fiber to the home. I'm not "stuck in the past" with a 17" screen or something.
The thing flies. It's screaming fast as it should be.
But I run a lean Debian Linux system, with a minimal window manager. It's definitely less bloated than Ubuntu and compared to Windows, well: there's no comparison possible.
Every single keystroke has an effect instantly. After reading the article about keyboard latency, I found out my keyboard was one of the lower-latency ones (HHKB), and yet I fine-tuned the Linux kernel's USB 2.0 polling of keyboard inputs to be even faster. ATM I cannot run a real-time kernel because NVidia refuses to support a non-stock kernel (well, that's what the driver says at least), but even without that: everything feels, and actually is, insanely fast.
I've got a dozen virtual workspaces / virtual desktops and there are shortcuts assigned to each of them. I can fill every virtual desktop with apps and windows and then switch like a madman on my keyboard between each of them: the system doesn't break a sweat.
I can display all the pictures on my NVME SSD in full screen and leave my finger on the arrow key and they'll move so quickly I can't follow.
Computers became very fast, and monitor sizes / file sizes for regular usage simply didn't grow anywhere near as quickly as CPU performance.
I love this comment for getting at what, in my opinion, Linux on the desktop is all about: spending your time with a computer that just plain feels great to use.
It doesn't look the same for everyone, of course. It's not about some universalizable value like minimalism. But this is a great example of one of the dimensions in which a Linux desktop can just feel really great in an almost physical way.
The low-end requirements for Debian GNU/Linux (assuming a graphical install and an up-to-date version) are not that low. They're higher than the low-end for Windows XP when it first came out, and probably close to the official requirements for "Vista-capable" machines. So yes, it's a very efficient system by modern standards but it does come with some very real overhead nevertheless.
Vista-capable wasn't that capable. It required 1GB of RAM to run well. Debian with zram and a light DE could run in 512MB of RAM, and SeaMonkey + uBlock Origin with patience.
Could you explain why any of the things he says make you think a number that high? I'm just finishing building my first PC ever (I've used computers for ... 20 years? But never actually built one). And I have a 1TB NVMe SSD from Western Digital, it was about 60 bucks. I have a 35" BenQ monitor from work, I think it was around $600 at the time of purchase. I don't have fiber at my home, but from what I understand, it's not prohibitively expensive in general. Anyway - I went with 16gb RAM. That felt like a reasonable starting point considering my current and prior daily driver were there as well. My build (minus admittedly expensive monitor) was, to me compared to the Macbooks I usually have for work, a fairly modest $1250 or so. So, roughly the same specs - seems like nothing too crazy?
Likely the fiber setting expectations, 2gbps is the "premium" tier in many places, where the monthly difference between fast and the top speed is about the same as 32gb of ram.
Personally, XFCE is pretty lightweight, customizable and stable. I actually did a blog post where I ran Linux Mint (based on Ubuntu) with XFCE, so you can get a rough idea of it in some screenshots: https://blog.kronis.dev/articles/a-week-of-linux-instead-of-...
It's not particularly interesting or pretty, but it works well and does most if not everything that you might need, so is my choice for a daily driver. Here's the debian Wiki page on it: https://wiki.debian.org/Xfce
Apart from that, some folks also like Cinnamon, MATE, GNOME or even KDE. I think the best option is to play around in Live CDs with them and see which feel the best for your individual needs and taste. Do note that Ubuntu as a base distro might give you fewer hassles in regards to proprietary drivers, if you don't care about using only free software much.
> I still can't believe that Windows has turned into such a bloatware/mess that i'm actually at a point i can't live with it anymore...
That is quite unfortunate, especially because there is some software that I think Windows does better - like MobaXTerm or 7-Zip (with its GUI), FancyZones (for window snapping) and most of the GPU control panels.
That said, as that article of mine shows, Linux on the desktop is actually way better than it used to be years ago and gaming is definitely viable, even if not all of the titles are supported. Sadly, I don't think that'll happen anytime soon, but it's still better than nothing!
I'll still probably go the dual boot route with Windows and Linux, or maybe will have a VM with GPU passthrough for specific games on Linux, although I haven't gotten it working just right, ever. Oh well, here's to a brighter future!
Well, other operating systems are still relatively decent at this. My main Linux install eats ~250 MiB of RAM after startup, and I've spent exactly zero amount of time on that, so it can be trimmed down further. That's on a system with 32 GiB of RAM — if you have less RAM, it will eat even less since page tables and various kernel buffers will be smaller.
FreeBSD can be comfortably used on systems with 64 MiB of RAM for solving simple tasks like a small proxy server. It has always been good at this — back in the day cheap VPS often used it (and not Linux) precisely because of its small memory requirements.
Today's version of IceWM takes around 16MB of memory; Xorg will add a bit to it.
There are smaller window managers, but I chose this one as an example as it gives a similar experience to the Windows XP of old.
I have experimented with slimming down a desktop as much as possible. But once you start a web browser with more than 3 tabs, memory usage goes through the roof. In the end, if you want to run an old system with 512MB of RAM, you are kind of forced to use the web sans JavaScript and images. You are almost better off using links or w3m and TUI apps for everything. NetSurf can work too if you are limiting the number of tabs open.
On a 1GB system you can definitely use a modern web browser, but you definitely need the ad/tracker removal extensions and have to take good care not to open more than 2-3 tabs or you will start swapping a lot.
I've worked on several projects where performance was an afterthought. After the product scaled a bit, it suddenly became the highest priority - but at that time, it was impossible to fix. At least for everyone that created the problem to begin with.
I've taught high performance data structures to dev teams. I've tried to explain how a complex problem can sometimes be solved with a simple algorithm. I've spent decades on attempting to show coworkers that applying a little comp-sci can have a profound effect in the end.
But no. Unfortunately, it always fails. The mindset is always "making it work" and problem solving is brute-forcing the problem until it works.
It takes a special kind of mindset to keep systems efficient. It is like painting a picture, but most seem to prefer doing it with a paint roller.
And I've worked on systems where months were essentially squandered on performance improvements that never paid off, because we never grew the customer base enough for them to be worthwhile...
I'm all for dedicating time and effort towards producing performant code, but it does come at a cost - in some cases, a cost of maintainability (for an extreme example there's always https://users.cs.utah.edu/~elb/folklore/mel.html). In fact I'd suggest in general if you design a library of functions where obviousness/clarity/ease-of-use are your primary criteria, performance is likely to suffer. And there are undoubtedly cases where the cost of higher-grade hardware (in terms of speed and storage capacity) is vastly lower than that of more efficient software. I'd also say performance tuning quite often involves significant trade-offs that lead to much higher memory usage - caching may well be the only way to achieve significant gains at certain scales, but then as you scale up even further, the memory requirements of the caching start to become an issue in themselves. If there were a simple solution it would have been found by now.
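To make that last caching trade-off concrete, here's a tiny Python sketch (expensive_lookup is a made-up stand-in): an unbounded cache buys speed with memory that grows without limit, while a bounded one caps the memory but gives some of the speed back through evictions.

    import functools

    def expensive_lookup(key):
        # stand-in for whatever slow computation or I/O is being cached
        return sum(i * i for i in range(50_000)) + hash(key) % 97

    # Unbounded: fastest repeat lookups, but memory grows with every distinct key ever seen.
    cached_unbounded = functools.lru_cache(maxsize=None)(expensive_lookup)

    # Bounded: memory is capped at ~100k entries, but evicted keys pay the full
    # cost again on their next lookup. Choosing maxsize *is* the speed/memory trade-off.
    cached_bounded = functools.lru_cache(maxsize=100_000)(expensive_lookup)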
Performance is not the same as efficiency, and efficiency can't be solved with more hardware.
Let's say I build a sorting algorithm that is O(N^2) complexity and works fine for small inputs (takes <1 millisecond), but it is going to be used for large data systems. Suddenly it takes hundreds of thousands of hours to sort the data.
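A quick way to feel that in practice (exact timings depend on the machine, but the shape of the curve doesn't):

    import random
    import time

    def quadratic_sort(items):
        # insertion sort: roughly N^2/4 comparisons on random input
        out = []
        for x in items:
            i = 0
            while i < len(out) and out[i] < x:
                i += 1
            out.insert(i, x)
        return out

    for n in (1_000, 4_000, 16_000):
        data = [random.random() for _ in range(n)]
        t0 = time.perf_counter()
        quadratic_sort(data)
        t1 = time.perf_counter()
        sorted(data)   # Timsort, O(N log N)
        t2 = time.perf_counter()
        print(f"n={n}: O(N^2) took {t1 - t0:.3f}s, O(N log N) took {t2 - t1:.3f}s")

Quadrupling the input multiplies the first timing by roughly 16 and barely moves the second - which is exactly the "fine on my test data, unusable in production" failure mode.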
One of the corps I worked with went all in on scalability in their architecture. One-click deployments, dynamic scaling of servers, rebalancing of databases, automatic provisioning of storage. They were handling 40-50k requests per second on a farm of roughly 15 large servers, which could scale down to 5 servers, or up to 50-ish before it began to wobble.
I got called in because the company had landed a large client that needed 100k requests per second. They tried scaling the system to fit the need, but the whole thing got unstable, and their solution was "more operations people to manage it".
I built a custom solution for the backend. Took about two months. The new system could do about 2100k requests per second on one server. Scalability of the new system was ~90% efficient as well, so lots of capacity for the future.
None of their developers understood computers or the science behind them. They were all educated and experienced developers, but none of that was applied to the problem. They were just assembling parts from the hardware store until something worked, and the resulting Frankenstein's Monster was put into production.
I'm struggling to believe any single server could usefully service 2100k (well over 2 million!) requests per second. Even Google, with their vast farm of servers, reportedly processes fewer than 100k requests per second globally. I've certainly read of servers capable of handling on the order of 1000k requests per second as a benchmark, but the requests are usually pretty trivial (the one I saw literally did no input processing at all, and just returned a single fixed byte! But it was written in Java, surprisingly.)
At any rate, I would think a tiny % of real-life systems actually need to be able to support that sort of load, and bringing in somebody to do the scalability work once it's clear it's needed seems like exactly the right strategy to me.
Not serving, but handling 2100k requests. Your skepticism is well placed, as the HTTP protocol is yet another example of an inefficient protocol that is nonetheless used as the primary protocol on the internet. Some webservers [1] can serve millions of requests per second, but I'd never use HTTP in code where efficiency is key.
No, I'm talking about handling requests. In this particular case, requests (32 to 64 bytes) were flowing through several services (on the same computer). I replaced the processing chain with a single application to remove the overhead of serialization between processes. Requests were filtered early in the pipeline, which made a ~55% reduction in the work needed.
Requests were then batched into succinct data structures and processed via SIMD. Output used to be JSON, but I instead wrote a custom memory allocator and just memcpy the entire blob on to the wire.
Before: No pre-filtering, off-the-shelf databases (PSQL), queue system for I/O, named pipes and TCP/IP for local data transfer. Lots of concurrency issues, thread starvation and I/O bound work.
After: Aggressive pre-filtering, succinct data structures for cache coherence, no serialization overhead, SIMD processing. Can saturate a 32-core CPU with almost no overhead.
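I obviously can't share the real code, and it wasn't Python, but the shape of it is roughly this sketch (numpy's vectorized operations standing in for hand-written SIMD, with a made-up 32-byte record layout):

    import numpy as np

    # Hypothetical fixed-size 32-byte request record.
    RECORD = np.dtype([("key", "u8"), ("flags", "u4"), ("value", "u4"),
                       ("payload", "u1", (16,))])

    def process_batch(raw: bytes) -> bytes:
        records = np.frombuffer(raw, dtype=RECORD)

        # 1. Filter early: drop uninteresting records before doing any real work.
        wanted = records[(records["flags"] & 0x1) != 0]

        # 2. Process the survivors as one flat batch with vectorized
        #    (SIMD-backed) operations instead of touching them one by one.
        results = (wanted["key"] * wanted["value"].astype("u8")) & 0xFFFF_FFFF

        # 3. No per-record serialization: dump the flat result buffer straight
        #    out, the moral equivalent of memcpy'ing one blob onto the wire.
        return results.astype("u4").tobytes()

The arithmetic is made up; the point is the structure: filter, batch, vectorize, one copy out.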
My go-to example on this: I remember running Debian on a Pentium 166 with 32 MB of RAM back in '98/'99. It would boot to the desktop using only 6 MB. It wasn't flashy, but it could handle the basics. Heck, Windows XP would boot to the desktop using a little under 70 MB.
But this isn't just Windows. Currently I am on Kubuntu 22.04 and it is using about 1.5 GB to get to the desktop! Yes, it is very smooth and flashy, but it seems like a bit much.
This is why I am interested in projects like Haiku and SerenityOS; they may bring some sanity back into these things.
Obviously there were huge limitations, but it shows what can be done. This fit on one 170K floppy and ran on a 1.44 MHz 8-bit machine with 64K of RAM.
In the 1990s I ran both Linux and Windows on less than 64M of RAM with IDEs, web browsers, games, and more.
If I had to guess at what's possible today, I'd fall back on the fairly reliable 80/20 rule and posit that 20% of today's bloat is intrinsic to increases in capability and 80% is incidental complexity and waste.
For me, also, the Commodore 64 came to mind. It had 64K of RAM and a 64K address range; because other things had to fit in there, not all of the RAM was usable at the same time. The clock frequency of the PAL model was 985 kHz (yes, kilo), so not even a full MHz.
Yet I could do:
* word processing
* desktop publishing
* working with scanned documents
* spreadsheets
* graphics
* digital painting
* music production
* gaming (even chess)
* programming (besides BASIC and ASM I had a Pascal compiler)
* CAD and 3D design (Giga CAD [1] fascinated me to no end)
* Video creation [2]
For all these tasks there were standalone applications [3] with their own GUIs [4]. GEOS was an integrated GUI environment with its own applications, and it was way ahead of its time [5].
It still blows my mind how all this could work.
My first Linux ran on a 386DX with 4 MB of RAM, but that's probably as low as one can get. Even the installer choked on that little RAM, and one had to create a swap partition and run swapon manually after booting but before the installer ran. In text mode it was pretty usable though, X11 worked, and I remember having GNU Chess running, but it was quite slow.
[3] Some came on extension modules which saved RAM or brought a bit of extra RAM, but we are still talking kilobytes. For examples see https://www.pagetable.com/?p=1730
[4] Or sort of TUI if you like; the strict separation of text and graphics mode wasn't a thing in the home computer era.
[5] The standalone apps were still better. So, as advanced as GEOS was, I believe it wasn't used productively much.
But if you had to use that software now, you'd say (justly) that it's extremely basic and limited, and that interoperability with other systems is not great.
Fully agreed. When I tried my old Commodore a while ago I couldn't stand the 50Hz screen flicker for long. Unbelievable that back in the day I spent hours on hours in front of that stroboscope.
For me it's more about the excitement that a bright future so clearly lay ahead of us, mixed with a slight disappointment that I sometimes feel we could have made more out of it.
Zawinski's Law - every program on Windows attempts to expand until it can be your default PDF viewer. [cloud file sync, advertising display board, telemetry hoover, App Store…]
2GB is a ridiculous amount of memory for something like an OS.
When we see egregious examples like Windows, it's arguable that having constraints might be desirable. It is well known that "limitation breeds creativity". It's certainly true outside of "tech" companies; I have witnessed it first hand. "Tech" companies are some sort of weird fantasy world where stupidity disguised as cleverness is allowed to run rampant. No place is more likely for this to happen than at companies that have too much money.
Many of them do not need to turn a profit, and a small number have insane profits due to a lack of meaningful competition (cf. honest work). With respect to the latter, it's routine to see (overpaid) employees of these companies brag on HN about how little work they do.
The standards were also a lot lower back then. Modern-day users expect high resolution and color depth for their screens, seamless hardware support no matter what they plug into the machine, i18n with incredibly complex text rendering rather than a fixed size 80x25 text mode with 256 selectable characters, etc. These things take some overhead. We can improve on existing systems (there's no real reason for web browsers to be as complex as they are, a lot of it is pure bells and whistles) but some of that complexity will be there.
You can achieve good memory footprints with Linux. Just 2 or 3 years ago I was daily driving Arch Linux with bspwm as a window manager, and it used only 300 MB, which to me is pretty darn good - but as soon as I opened VS Code with a JS project, my RAM usage was at 12 GB. We have a lot of bloatware everywhere; that's pretty sad.
edit: This reminds me of some rants from Casey Muratori about VS [0] and Windows Terminal [1]
I remember needing to get Windows XP under 64MB of RAM so that I could run some photo editing software. XP was relatively feature complete, I don't think Windows currently ships with 32x the features of XP (64MB vs 2048MB minimum).
Linux with a lightweight GUI for example can still run okay with just 128MB. I ran Debian with LXDE on an old IBM T22, and it worked perfectly well. Running Firefox was a problem (but did eventually work), but something more stripped down like NetSurf or Dillo is blazingly fast.
SeaMonkey is still around and works nicely on low-spec machines (not sure about 128 MB though) as a step up from NetSurf. You get a graphical email client and news/Gopher support built in, plus a rudimentary web page editor. Printing a web page to PDF is a rough-and-ready way of getting rich text onto paper. The 'legacy' version of the NoScript plug-in will allow selective use of JavaScript (saves battery and helps with security, which might be an issue).
We don't need to worry about memory efficiency until we stop getting gains via hardware improvements. For now, developers can just slap a web app into some Chromium-based wrapper, make sure their code doesn't have any n^2 in it, and they're good to go.
Tell that to the person on a fixed income who has to invest in an expensive new machine because their 2015 laptop (which still has a whopping 4 GB of memory and a CPU that would have been top-of-the-line twenty years ago) has become unusably slow.
Software efficiency is a serious equity and environmental issue, and I wish more people would see it that way.
This is why I argue that one of the best things the Free/Libre software developer community can start doing is optimizing for lower-spec machines. Microsoft and Apple are either too closely knit with hardware vendors, or directly provide the hardware themselves, to have any interest in prolonging the lifetime of the hardware they sell. Optimizing open OSes can prolong the lifetime of hardware by a significant margin, and it means that lower-income folks are not left in the dark. I don't just mean in well-off countries - if you are in the lower classes of the global South, there is no other option.
There was (is? not sure) a version of Firefox for PowerPC Mac OS X - TenFourFox - that brought modern Firefox features and support to Macs that were long past their prime. The developer mentioned that their favorite story from its development was: "One of my favourite reports was from a missionary in Myanmar using a beat-up G4 mini over a dialup modem; I hope he is safe during the present unrest."
This is what can happen when things are optimized for the people, not the business. This is part of why I still use a Core 2 Duo as my daily runner, if it ain't broke don't fix it.
>This is why I argue that one of the best things that the Free/Libre software developer community can start doing is optimizing for lower spec machines.
But isn't the primary application for these machines going to be the web browser, which is pulling in so much JS insanity that the web sites won't render well anyway?
To be fair, if you forced programmers to write efficient code you would just make everything more expensive and flood the market for unskilled labor with university graduates that can't find their own ass.
If it really did come down to that, I would still rather people had to pay more for software and less for hardware, because software has a comparatively minuscule environmental impact.
Actually no. If programmers actually learned how to properly program the machines, we'd not be in the mess we are in right now. Abstraction is the cancer that got us to where we are.
Nobody has any actual clue what they're doing, everyone keeps writing code for the compiler hoping for the best and the rest of the world has to buy new machines because the programmers of the last decades sucked.
That, btw, includes most of you people reading this. You're fucking welcome.
No need to invest into an expensive new machine; a device from 5 years ago, with some more added RAM, would already be pretty adequate. Typing this from a Thinkpad T470 which was introduced in 2017, which is my main workhorse machine.
A top-of-the-line laptop CPU from 20 years ago likely just doesn't support addressing more than 4 GB of RAM. Forcing it to work on modern resource-heavy web pages and media is like forcing a GPU from 20 years ago to run Skyrim. It's just not adequate.
20 years ago is pushing it a bit. But 12 years ago, in 2008, I used a computer with 4GB of RAM in order to:
• Read the news
• Post on social media
• Make video calls
• Use instant messaging
• Create and edit word documents/presentations/spreadsheets
Today I use my computer for all of those same things... and yet they all require drastically more memory (and CPU, GPU, etc). What happened, and how does this benefit consumers? Yeah, modern web pages are resource-heavy—but to what end†?
In some cases, the requirements really did change. For example, I can now watch videos in 4K; my 2008 computer could handle 1080p, but I imagine it wouldn't have handled 4K as well. However, I suspect many users of old machines would be perfectly happy to drop down to a lower resolution.
---
† Something I find amusing in all this... people often say they're glad Flash applets died because they were slow. Nowadays, instead of Flash, we use browser apps written in Javascript. I wonder how "slow" those apps would run if you threw them on a computer from the Flash era. (This isn't to discount other problems with Flash, although I do think it has a worse reputation than it deserves.)
You can use a computer with 4 GB of RAM today for all the things you've mentioned. It might swap here and there and not be as snappy, but generally it'll work.
I think Apple only recently stopped selling 4 GB computers. And their phones from last year ship with 4 GB of RAM while being perfectly able to do all the things you've mentioned as well.
Yeah, I agree - I don't think RAM is usually the problem.
I used to have a 2016 dual core macbook pro with integrated graphics and 8gb of RAM or something. The machine was great when I got it, but 18 months ago it was limping along and I finally decided to get rid of it.
And it wasn't any 3rd party apps that killed the machine. Every time the machine started up, iphotoanalysisd or some random spotlight service or something would be eating all my CPU. It was always a 1st party Apple app which was making it slow. And the graphics felt laggy. Just moving windows around felt bad a lot of the time, even when I didn't have anything open. Xcode would sometimes lag the machine so much that it would drop keystrokes while I was typing. I had RAM to spare - it was a CPU problem.
In the process of wiping the machine, I booted into Recovery mode and it booted the 2016 recovery image of macos. Holy smokes - the graphics were all wicked fast again! I spent a couple minutes just moving windows around the screen in recovery mode marvelling at how fast it felt.
I wonder if reverting to an old version of macOS would have fixed my problems. As far as I can tell, this was all Apple's fault. They piled macOS up with so much crap that their own computers couldn't cope with the weight. I also wonder if they broke the Intel graphics drivers in some point release along the way, or started relying on GPU features that Intel's driver only had software emulation for.
Modern macOS still has all that crap - the efficiency cores in my M1 laptop are constantly spinning up for some ridiculous Apple service or other. But at least now that still leaves me with 8 P-cores for my actual work. It's ridiculous.
I bet Linux would have worked great on that old laptop. I wish I'd tried it before turfing the machine.
While I do agree with this, it seems worse than that - I've observed that a number of systems that used to run well 5 or so years ago simply don't any more, even with exactly the same OS and essentially the same software.
I don't know to what degree that is because of actual hardware deterioration (or at least, file system fragmentation), vs. additional gumpf getting automatically installed and slowing things down (but every time I've tried to remove such gumpf, it hasn't really helped), or even user perception (but I don't buy that this explains cases of apps that now take over 30 seconds to start up, when they used to take 5 at most). I have one 8+ year old Windows 7 machine in particular that I use for music streaming, and it basically can't be used for at least 30 seconds after logging in - but then it seems mostly fine after that.
"Windows Rot" is definitely a thing but it can be cleared out by doing a clean reinstall of the OS. While this can be time consuming, you'd likely be doing it anyway if you got a new machine.
No idea where I'd even find an installer for Windows 7! It does make me wonder whether upgrading it would actually help. But for now it works well enough I'd rather not risk it (the other thing I use it for is some old software that requires a FAT partition for its licensing to work!).
Why? Are the types of things I want that laptop to do different today than they were 8 years ago? Sure, apps and websites are heavier, but I'd posit the things most people do on their computers haven't changed in a decade at least.
> That has never been a reasonable expectation in the history of computing.
Yes, but again, why? As I see it, everyone has been conditioned to this lie that computers naturally slow down over time, because that's the way it has always been relative to the speed of current software. Originally, that was for a good reason—I'm glad programs now use full-color GUIs. But now?
What would actually happen if Moore's law ended tomorrow, and we were no longer able to make computers faster than they are today? I suspect that a (slim) majority of computer users would actually benefit. Not hardcore gamers, not scientists, and certainly not software developers--some people really do need as much performance as they can get. But for the people who just need to message friends, write documents, check email, etc., the experience would be unchanged—except that their current computers would never slow down!
I absolutely agree. It seems like most software developers only start optimizing code once our software starts feeling slow on our top-of-the-line development machines. As a result, every time we get faster computers we write slower code. When the M1 macs and the new generation of AMD (and now intel) chips came out 18 months or so ago, I spent big. I figured I had about 2 years of everything feeling fast before everyone else upgraded, and all the software I use slowed down again.
Years ago while I was at a startup, I accidentally left my laptop at work on a Friday. I wanted to write some code over the weekend. Well, I had a raspberry pi kicking around, so I fired up nodejs on that and took our project for a spin. But the program took ages to start up. I hadn't noticed the ~200ms startup time on my "real" computer, but on a r.pi that translated to over 1 second of startup time! So annoying! I ended up spending a whole morning profiling and debugging to figure out why it was so slow. Turns out we were pulling in some huge libraries and only using a fraction of the code inside. Trimming that down made the startup time ~5x faster. When I got into the office on monday, I pulled in my changes and felt the speed immediately. But I never would have fixed that if I hadn't spent that weekend developing on the raspberry pi.
Since then I've been wondering there's a way to do this systematically. Have "slow CPU tuesdays" or something, where everyone in the office turns off most of our CPU cores out of solidarity with our users. But I'm not holding my breath.
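The node specifics don't translate directly, but the general trick does. In Python terms it's the difference between paying for a heavy dependency on every startup versus only on the code path that needs it (matplotlib here is just an example of a heavy import):

    # Paid on every startup, even for runs that never render a chart:
    # import matplotlib.pyplot as plt

    def render_chart(data):
        # Deferred: the heavy import only happens when a chart is actually requested.
        import matplotlib.pyplot as plt
        plt.plot(data)
        plt.savefig("chart.png")

(CPython's "python -X importtime" flag covers the profiling half of the story.)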
I've never expected my computer to run worse over time. There's no real mechanism for that to even happen; it works fine until it fails completely.
Recently it's become less possible to run the same software for 10+ years because so many things are subscription only and have unnecessary networking, which makes it necessary to patch security flaws, and then you have to accept whatever downgrade the vendor forces on you.
Older applications that you used to be able to just install run just as well as they did the day they came out on the hardware available at the time. The idea that computers "get worse" is entirely a phenomenon of the industry being full of incompetence. Even (or perhaps especially) programmers at FAANG companies are just not very good at their jobs.
Check out the argument Casey Muratori got into with the Microsoft terminal maintainers about how slow the thing was. He got the standard claims about how "oh it's so complex and Unicode is difficult and he's underestimating how hard it is", so he wrote a renderer in a few hours that was orders of magnitude faster, used way less memory, and had better Unicode support.
There is (or at least was) some truth in computers getting worse over time.
File system fragmentation was a very significant problem when most people still used HDDs as their primary mass storage media. SSDs are far less affected by fragmentation because of their much faster random access times, but on HDDs, performance really did degrade over time.
The Windows Registry is an arcane secret not even Microsoft fully comprehends at this point, and it can get very messy if a user installs and uninstalls lots of programs frequently. This is, of course, a problem with uninstallers not uninstalling cleanly and not a problem with Windows or the users. With so much crap moving to Chrome online-software-as-a-service outfits, users aren't (un)installing as many programs as frequently anymore, but an unkempt Windows installation can definitely slow down over time.
Software in general also just gets more and more bloated as the moons pass. More bloated software means less efficient use of hardware, meaning less performance and more user grief over time.
I have a netbook from around 2010. It has 2 GB of RAM and a single core Atom processor. It boots to a full Linux GUI desktop in a minute or so. It can handle CAD software, my editor, and my usual toolchain, if a bit slowly. It even handles HD video and the battery still holds a 6 hr charge.
But it doesn't really have enough RAM to run a modern web browser. A few tabs and we are swapping. That's unusably slow. A processor that's 5 or 20x slower is often tolerable; a working set that doesn't fit in RAM means thrashing, with a 1000x slowdown. And so this otherwise perfectly useful computer is garbage. Not enough RAM ends a machine's useful life before anything else does these days, in my experience.
That's fine for those desktop users who don't care about spinning fans, but many users are on laptops and care about battery life. An inefficiently coded app might keep the CPU running at high utilization even when that's absolutely not required, because it's just a chat app or some such.
> For now developers can just slap a web app into some chromium based wrapper […]
making 10% of users unreachable in order to more easily reach the other 90%. yeah, it’s a fine business strategy. though i do wish devs would be more amenable to the 10% of users who end up doing “weird” things with their app as a result. a stupid number of chat companies will release some Electron app that’s literally unusable on old hardware, and then freak out when people write 3rd party clients for it because it’s the only real option you left them.
DRAM density and cost isn't improving like it used to.
Also memory efficiency is about more than just total DRAM usage; bus speeds haven't kept pace with CPU speeds for a long time now. The more of the program we keep close to the CPU -- in cache -- the happier we are.
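A crude way to see that from Python (numpy only to get a predictable memory layout; numbers vary a lot by machine): summing the same count of doubles gets noticeably slower when every load lands on a fresh cache line.

    import time
    import numpy as np

    a = np.random.rand(32_000_000)   # ~256 MB of doubles, contiguous in memory

    n = a.size // 16
    t0 = time.perf_counter()
    a[:n].sum()      # contiguous: each 64-byte cache line serves 8 doubles
    t1 = time.perf_counter()
    a[::16].sum()    # same element count, but every element sits on its own cache line
    t2 = time.perf_counter()
    print(f"contiguous: {t1 - t0:.4f}s   strided: {t2 - t1:.4f}s")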
You are getting a whole runtime and standard library bundled in. The whole point of Python is quick and dirty scripts, because saving you 4 hours is worth more than using 20 MB less RAM for something that gets run a couple of times.
early expectations on code interfacing and re-usability failed catastrophically
In my previous job, rather than give people root access to their laptops, we had to do things like run a Docker image that ran 7zip and pipe the I/O to/from it. I'm not kidding, we all did this, and it was only bearable thanks to bash aliases and the fact that we had 16 GB of RAM.