Hasn't it, though? HDR, fluid animations, monstrous resolutions, 3D everything, accessibility, fancy APIs for easier development allowing for more features, support for large amounts of devices, backwards compatibility, browsers are almost unrecognizable in featureset to the point they resemble an OS unto themselves, and email clients have at least stayed mostly the same, except for the part where they also ship a browser and few of us even use 'em anymore!
Some of those features combine exponentially in complexity and hardware requirements, and some optimizations will trade memory for speed.
Not going to defend particular implementations, but requirements? Those have definitely grown more than we give them credit.
That's the desktop compositor. Windows 7 already had one and ran on 1 GB of RAM.
> accessibility
Not everyone needs it, so it should be an optional installable component for those who do.
> fancy APIs for easier development allowing for more features
Those still use Win32 under the hood. Again, .NET has existed for a very long time. MFC has existed for even longer.
> support for large amounts of devices
No one asked for Windows on touchscreen anything. Microsoft decided that themselves and ruined the UX for the remaining 99% of the users that still use a mouse and a keyboard.
> backwards compatibility
That's what Microsoft does historically, nothing new here.
> browsers are almost unrecognizable in featureset to the point they resemble an OS unto themselves
No one asked for this. My personal opinion is that everything app-like about browsers needs to be undone, yesterday, and they should again become the hypertext document viewers they were meant to be. Even JS is too much, but I guess it does have to stay.
I think you have to reason this one out. Your statement, to me, doesn’t hold water.
Let’s start with HDR. That requires the content that’s being rendered to have higher bit depth. Not all of this is stored in GPU memory at once, a lot is stored in system RAM and shuffled in and out.
Now take fluid animations. The interpolation of positions isn’t done solely on the GPU. It’s coordinated by the CPU. I don’t think this one necessarily adds RAM usage, but I think your comment is incorrect.
And lastly, with resolutions, the GPU is only responsible for the processing and output. You still need high-resolution data going in. This is easily observed by viewing any low-resolution image: it will be heavily blurred or pixelated on a high-resolution screen. It stands to reason that the OS needs high enough resolution assets to accommodate high-resolution screens. These aren’t necessarily all stored on disk as high-resolution graphics, but they have to be held in memory as such.
——
As to the rest of your points, they basically boil down to: I don’t want it, so I don’t see why a default install should have it. Other people do want a highly featureful browser that can keep up with the modern web. And given that webviews are a huge part of application rendering today, the browser actively contributes to memory usage.
>> Let’s start with HDR. That requires the content that’s being rendered to have higher bit depth. Not all of this is stored in GPU memory at once, a lot is stored in system RAM and shuffled in and out.
HDR can still fit in 32-bit pixels. At 4K x 2K we have 8 megapixels, or a 32 MB frame buffer. With triple buffering that's still under 100 MB. Video games have been doing all sorts of animation for decades. It's not a lot of code, and a modern CPU can actually composite a desktop in software pretty well. We use the GPU for speed, but that doesn't have to mean more memory.
The difference between 2000 and 2023 is the quantity of data to move, and like I said, that's about 100 MB.
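For reference, here's that arithmetic spelled out as a quick Python sketch (the 4-bytes-per-pixel figure is an assumption that covers both 8:8:8:8 SDR and 10:10:10:2 HDR packing):

    # Frame-buffer arithmetic behind the figures above.
    def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=3):
        """Total frame-buffer memory in MB for the given number of buffers."""
        return width * height * bytes_per_pixel * buffers / 1024 ** 2

    for w, h in [(1920, 1080), (3840, 2160), (4096, 2048)]:
        print(f"{w}x{h}: {framebuffer_mb(w, h):.0f} MB with triple buffering")
    # 1920x1080: ~24 MB, 3840x2160: ~95 MB, 4096x2048: ~96 MB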
Unintuitively, your two questions are somewhat at odds with each other.
The more work you do on the GPU, the more you need to shuffle because the more GPU memory you’d use AND the more state you’d need to check back on the CPU side, causing sync stalls. It’s not insurmountable, and macOS puts a lot more of its work on the GPU for example. Windows is a little more conservative in that regard.
Here are some more confounding factors:
- Every app needs one or more buffers to draw into. Especially with HiDPI screens this can eat up memory quickly (rough numbers sketched at the end of this comment). The compositor can juggle these to try and get some efficiency, but it can’t move all the state to the GPU due to latency.
- You also need to deal with swap memory. You’d ultimately need to shuffle data back to system RAM and then to disk and back, which is fairly slow. It’s much better theoretically on APUs though.
Theoretically, APUs stand to solve a lot of these issues because they blur the lines of GPU and CPU memory.
DirectStorage doesn’t address the majority of these concerns, though. It only means the CPU doesn’t need to load data first to shuffle it over; it doesn’t help if the CPU does need to access said data or schedule it.
It’s mainly applicable to games, where resource access is known ahead of time.
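To put rough numbers on the per-app buffer point above, here's a small back-of-the-envelope Python sketch; the window count, sizes, 2x scale factor and double buffering are made-up but plausible assumptions:

    # How per-window backing stores add up on a HiDPI desktop.
    BYTES_PER_PIXEL = 4  # 8:8:8:8

    def window_buffer_mb(width, height, scale=2, buffers=2):
        """Memory for one window's buffers at a given HiDPI scale factor."""
        return (width * scale) * (height * scale) * BYTES_PER_PIXEL * buffers / 1024 ** 2

    # e.g. 20 average-sized windows, each backed at 2x and double-buffered
    total = sum(window_buffer_mb(1280, 800) for _ in range(20))
    print(f"{total:.0f} MB just for window backing stores")  # ~625 MB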
Only if you’re dealing with just the desktop environment and don’t allow the user to load applications. Or if those apps also didn’t allow dynamism of any kind, like loading images from a website.
> > browsers are almost unrecognizable in featureset to the point they resemble an OS unto themselves
> No one asked for this. My personal opinion is that everything app-like about browsers needs to be undone, yesterday, and they should again become the hypertext document viewers they were meant to be. Even JS is too much, but I guess it does have to stay.
People did ask for this, because it made them a lot of money.
You should recognize your opinion is a minority one outside of tech (and possibly, there too).
To wit, virtually no one is jumping to Gopher or Gemini.
What people want is a way to run amazon.com (and gmail and slack and so on), on any of their devices, securely, and without the fuss of installing anything.
Ideally the first-time use of amazon.com should involve nothing more than typing "amazon" and hitting enter. It should show content almost instantly.
Satisfying that user need doesn't require a web browser. If OS vendors provided a way to do that today, we'd be using it. But they don't.
OS vendors still don't understand that. They assume people will forever want to install software via a package manager. They assume software developers care about their platform's special features enough to bother learning Kotlin / Swift / GTK / C# / whatever. And they assume all software I run should be trusted with all of my local files.
Why is docker popular? Because it lets you type the name of some software. The software is downloaded from the internet. The software runs on linux/mac/windows. And it runs in a sandbox. Just like the web.
The web - for all its flaws - is still the only platform which delivers that experience to end users.
I'd throw out javascript and the DOM and all that rubbish in a heartbeat if we had any better option.
> What people want is a way to run amazon.com (and gmail and slack and so on)
Guess what, both GMail and Slack have video calls. They use WebRTC. The browser has to support it. So the WebRTC code is a part of it.
> Ideally the first-time use of amazon.com should involve nothing more than typing "amazon" and hitting enter. It should show content almost instantly.
And it does. Open an incognito tab, type amazon.com, it's pretty crazy how fast it loads, with all the images.
You're just proposing to move all the complexity of the browser into some other VM that would have to be shipped by default by all OS platforms before it could become useful.
Java tried exactly this, and it never took off in the desktop OS world. It wasn't significantly slimmer than browsers either, so it wouldn't have addressed any of your concerns.
Also, hyperlinking deep into and out of apps is still something that would be very very hard to achieve if the apps weren't web native - especially given the need to share data along with the links, but in a way that doesn't break security. I would predict that if you tried to recreate a platform with similar capabilities, you would end up reinventing 90% of web tech (though hopefully with a saner GUI model than the awfulness of HTML+CSS+JS).
> You're just proposing to move all the complexity of the browser into some other VM that would have to be shipped by default by all OS platforms before it could become useful.
I'm not proposing that. I didn't propose any solution to this in my comment. For what it's worth, I agree with you - another java swing style approach would be a terrible idea. And I have an irrational hate for docker.
If I were in solution mode, what I think we need is all the browser features to be added to desktop operating systems. And those features being:
- Cross platform apps of some kind
- The app should be able to run "directly" from the internet in a lightweight way like web pages do. I shouldn't need to install apps to run them.
- Fierce browser tab style sandboxing.
If the goal was to compete with the browser, apps would need to use mostly platform-native controls like browsers do. WASM would be my tool of choice at this point, since then people can make apps in any language.
Unfortunately, executing this well would probably cost 7-10 figures. And it'd probably need buy in from Apple, Google, Microsoft and maybe GTK and KDE people. (Since we'd want linux, macos, ios, android and windows versions of the UI libraries). Ideally this would all get embedded in the respective operating systems so users don't have to install anything special, otherwise the core appeal would be gone.
Who knows if it'll ever happen, or if we'll just be stuck with the web forever. But a man can dream.
My thinking is that, ultimately, if you want to run the same code on Windows, MacOS, and a few popular Linux distros, and to do so on x86 and ARM, you need some kind of VM that translates an intermediate code to the machine code, and that implements a whole ton of system APIs for each platform. Especially if you want access to a GUI, networking, location, 3D graphics, Bluetooth, sound etc. - all of which have virtually no standardization between these platforms.
You'll then have to convince Microsoft, Apple, Google, IBM RedHat, Canonical, the Debian project, and a few others, to actually package this VM with their OSs, so that users don't have to manually choose to install it.
Then, you need to come up with some system of integrating this with, at a minimum, password managers, SAML and OAuth2, or you'll have something far less usable and secure than an equivalent web app. You'll probably have to integrate it with many more web technologies in fact, as people will eventually want to be able to show some web pages or web-formatted emails inside their apps.
So, my prediction is that any such effort will end-up reimplementing the browser, with little to no advantages when all is said and done.
Personally, I hate developing any web-like app. The GUI stack in particular is atrocious, with virtually no usable built-in controls, leading to a proliferation of toolkits and frameworks that do half the job and can't talk to each other. I'm hopeful that WASM will eventually allow more mature GUI frameworks to be used in web apps in a cross-platform manner, and we can forget about using a document markup language for designing application UIs. But otherwise, I think the web model is here to stay, and has in fact proven to be the most successful app ecosystem ever tried, by far (especially when counting the numerous iOS and Android apps that are entirely web views).
> You'll then have to convince Microsoft, Apple, Google, IBM RedHat, Canonical, the Debian project, and a few others, to actually package this VM with their OSs, so that users don't have to manually choose to install it.
I think this is the easy part. Everyone is already on board with webassembly. The hard part would be coming up with a common api which paves over all the platform idiosyncrasies in a way that feels good and native everywhere, and that developers actually want to use.
> what I think we need is all the browser features to be added to desktop operating systems.
I trust you are aware Microsoft did exactly that, and the entire tech world exploded in anger, and the US Government took Microsoft to court to make them undo it on the grounds that integrating browser technology into the OS was a monopolistic activity[0].
While I agree with you, I don’t think people really wanted this. I mean, life wasn’t miserable when web apps didn’t exist.
We could have lived in an alternative universe where we had succeeded in teaching people the basics of how to use the computer as a powerful tool for themselves.
Instead, corporations rushed to make most of the things super easy to make billions on the way.
I’d even say that this wasn’t really a problem until they realized that closed computers allowed them more control and more money.
So yeah, now we are stuck with web apps on closed systems and most people are happy with it, that’s true.
And, as time passes, we are losing universal access to "the computer". Instead of a great tool for giving power to the people, it’s being transformed into a prison that controls what people can do, see and even think.
P.S.: When I say "computer" I include PCs, phones, tablets, voice assistants … everything with a processor running arbitrary programs.
I disagree.
When I want to deliver a piece of software to my parents, I first think about a web solution (to me, they stand in for >80% of PC users).
I just uninstalled a browser toolbar from my step father's PC last weekend.
There are simply too many bad actors out there.
The browser sandbox works pretty well against them.
My parents have become very hesitant to install anything, even iOS updates, because they don't like change and fear that they might do something wrong.
I agree that JS is not a gold standard. Still, it works most of the time, and with TypeScript stapled on top it is acceptable.
Time has proven again and again (not only in tech) that the simple solutions will prevail.
Want to change it? Build a simpler and better solution.
I don't like that either, but that's human nature at work.
I'm so sick of people shutting down valid opinions because they have a "minority opinion" about tech. That tech slobbers so messily over the majority -- and, seemingly, ONLY the majority -- is a massive disservice to all of the nerds and power users that put these people where they are today.
Maybe, instead of shutting those opinions down, you should reflect on how you, in whatever capacity you serve our awful tech overlords, can work to make these voices more heard and included in software/feature design
I hear you, but OP said 'no one asked for this' but people did ask for this. The whole argument was about popularity of the idea to add features to browsers.
I'd also like to add that accessibility is not a binary that's either on or off. The parent comment might be thinking of features for people with high disability ratings, but eventually everyone has some level of disability. Some even start off life with one: color blindness, vision impairment. Most people have progressive near vision loss (presbyopia) as they age.
Also, disability may not be permanent. I recently underwent major surgery and for at least a few days afterwards using my cell phone was nearly impossible. I resorted to voice control a few times because I did not have the coordination or cognitive function to type. (Aside: cell phones in general are accessibility dumpster fires, but it took a major life event to demonstrate to me how bad it really is.)
So no, accessibility is not just a toggle switch or installable library. In fact, I hope future UI design incorporates some kind of non-intrusive learning and adaptability, such that when the system detects the user continually making certain kinds of errors, the UI will adapt to help.
Of course. Navigating around the install process without accessibility already enabled is going to be a non-starter for many.
As for why all the bloat? I speculate it's because accessibility features are a second-class citizen at best, and when it comes to optimizing and streamlining, all the effort in development goes into the most-used features, whether or not they are the most essential.
I'm suggesting that modern accessibility support doesn't need more memory than the entirety of Windows 95. So 4 MB extra, or let's say 10x that to be generous.
Yes. At least in Windows 10 it's a disaster. Without high contrast, which looks terrible, it draws gray colors on a light background, making it difficult to read.
Accessibility is much more than just labels for a screen reader. Please stop trivializing anything that you don’t use directly; it’s a common thread between all your comments, and it’s a disservice to both the points you’re trying to make and the people who actually use those things.
Accessibility includes interaction design, zoom ability, audio commands, action link ups, alternate rendering modes, alternate motion modes, hooks for assistive devices to interact with the system. It goes far deeper into the system than just labels for a screen reader.
If you stopped to just think about the vast number of disabilities out there, you’d realize how untrue your statement is.
All that extra crap doesn't make any sense, when the earliest versions of Windows up to ~7 had controls to let you adjust the UI to exactly how you'd like it, which is of course very important for accessibility.
Then starting with Windows 8, they removed a lot of those features. 11 is even worse.
My point is that accessibility being a thing shouldn't ruin the UI for the people who don't need it. There's no need to visually redesign anything to introduce accessibility. Apps don't need to be made aware whether some control has focus because the user has pressed the tab key, or because it's being focused by a screen reader, or because of some other assistive technology. Colors and font sizes can also be configured and they've been configurable since at least Windows 3.1 — and that is exposed to apps.
Again, I don't see how the things you specified can't be built into existing win32 APIs and why anything needs to be designed from the ground up to support them.
Your point about “apps don’t need to be made aware” is precisely the reason accessibility is part of the system UI framework.
Accessibility is also not something that is just a binary. You may be slightly short-sighted and need larger text, or you might need an OS-specified colour palette that overrides the app’s rendering. There are just so many levels of nuance here. It’s not just “apps can configure a palette”, it’s that they need to work across the system.
If you have the time, I really suggest watching the Apple developer videos on accessibility to see why it’s not as simple as you put it. Microsoft do a lot of great work for accessibility too, they just don’t have much content up that delves into it.
As to why it has to be developed from the ground up, it doesn’t, but it needs to be at the foundation regardless. Apple for example didn’t redo their UI for accessibility, however Microsoft take a more “we won’t touch existing stuff in case we break it” approach to their core libs.
Also, again, I’d point out that you’re purposefully trying to trivialize something you don’t use.
> It’s not just “apps can configure a palette”, it’s that they need to work across the system
There is a system-provided color palette. I don't know where this UI is in modern Windows, but in versions where you could enable the "classic" theme, you could still configure these colors. They are, of course, exposed to apps, and apps are expected to use them to draw their controls. That, as well as theme elements since XP.
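For example, the classic palette is still queryable by any app through the Win32 GetSysColor call. A minimal, Windows-only sketch using Python's ctypes (the color indices are the usual winuser.h constants):

    # Read the user-configurable system colors that classic Win32 apps are
    # expected to draw their controls with (Windows only).
    import ctypes

    user32 = ctypes.windll.user32
    SYS_COLORS = {"COLOR_WINDOW": 5, "COLOR_WINDOWTEXT": 8,
                  "COLOR_HIGHLIGHT": 13, "COLOR_BTNFACE": 15}

    for name, index in SYS_COLORS.items():
        colorref = user32.GetSysColor(index)  # packed as 0x00BBGGRR
        r, g, b = colorref & 0xFF, (colorref >> 8) & 0xFF, (colorref >> 16) & 0xFF
        print(f"{name}: #{r:02x}{g:02x}{b:02x}")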
> Microsoft take a more “we won’t touch existing stuff in case we break it” approach to their core libs.
Making sure you don't break existing functionality is called regression testing. I'm sure Microsoft already does a lot of it for each release.
And actually it's not quite that. The transition from 9x to NT involved swapping an entire kernel from underneath apps. Most apps didn't notice it. In fact, the backwards compatibility is maintained so well that I can run apps from the 90s — built for, and only tested on, the old DOS-based Windows versions — on my modern ARM Mac, in a VM, through an x86 -> ARM translation layer.
> Accessibility includes interaction design, zoom ability, audio commands, action link ups, alternate rendering modes, alternate motion modes, hooks for assistive devices to interact with the system. It goes far deeper into the system than just labels for a screen reader.
I wonder where the current status quo lies in regards to both desktop computing and web applications/sites. Which OSes and which GUI frameworks for those are the best or worst, how do they compare? How have they evolved over time? Which web frameworks/libraries give one the best starting point to iterate upon, say, component libraries and how well they integrate with something like React/Angular/Vue?
Sadly I'm not knowledgeable enough at the moment to answer all of those in detail myself, but there are at least some tools for web development.
For example, this seems to have helpful output: https://accessibilitytest.org
There was also this one, albeit a bit more limited: https://www.accessibilitychecker.org
I also found this, but it seemed straight up broken because it couldn't reach my site: https://wave.webaim.org/
From what I can tell, there are many tools like this: https://www.w3.org/WAI/ER/tools/
And yet, while we talk about accessibility occasionally, we don't talk about how good a starting point certain component frameworks (e.g. Bootstrap vs PrimeFaces/PrimeNG/PrimeVue, Ant Design, ...) provide us with, or how easy it is to set up build toolchains for automated testing and reporting of warnings.
As for OS-related things, I guess seeing how well Qt, GTK and other solutions support the OS functionality, and what that functionality even is, is probably a whole topic in and of itself.
Accessibility checkers can be helpful, particularly for catching basic errors before they ship. The large majority of accessibility problems a site can have cannot be identified by software; humans need to find them.
Current Bootstrap is not bad if you read and follow all of their advice. I'm not claiming there are no problems lurking amongst their offerings.
If you search for "name-of-thing accessibility" and don't find extensive details about accessibility in the thing's own documentation, it probably does a poor job. A framework can't prevent developers from making mistakes.
"The large majority of accessibility problems a site can have cannot be identified by software"
Bold statement. I used to work in exactly that area and the reality is humans often simply don't bother finding many of the accessibility issues that automated tools can and do find. Even if such a tool isn't able to accurately pinpoint every possible issue, and inevitably gives a number of false positives (the classic being expecting everything to have ALT text, even when images are essentially decorative and don't provide information to the user), the use of it at least provides a starting point for humans to be able to realistically find the most serious issues and ensure they're addressed.
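To illustrate the kind of check those tools automate, here's a toy Python sketch (standard library only) that flags images with no alt attribute at all; judging whether an existing alt value is actually meaningful is exactly the part that still needs a human:

    # Toy automated accessibility check: find <img> elements without any
    # alt attribute. Meaningfulness of the text is out of scope.
    from html.parser import HTMLParser

    class MissingAltChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.findings = []

        def handle_starttag(self, tag, attrs):
            if tag == "img" and "alt" not in dict(attrs):
                self.findings.append(f"<img> missing alt at line {self.getpos()[0]}")

    checker = MissingAltChecker()
    checker.feed('<p><img src="logo.png"><img src="photo.jpg" alt="A red barn"></p>')
    print(checker.findings)  # ['<img> missing alt at line 1']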
However I would never claim that good accessibility support requires significantly more (e.g. >2x) resources, and certainly not at the OS level.
In fact, you typically get better accessibility if you use the built-in OS (or browser) provided controls, which are less resource-intensive than the fancy custom ones apps seem to like using these days (even MS's own apps are heavy on custom controls for everything).
I currently work in this area (web accessibility) and am just repeating what is commonly understood. When considering what WCAG criteria cover (which is not even everything that could pose a barrier to people with disabilities), most failures to meet the criteria cannot be identified by software alone.
For example, the classic I would say is not whether an image needs an alt attribute or not but whether an image's alt attribute value is a meaningful equivalent to the image in the context where it appears.
I'm not sure what kind of "resources" you're referring to. If you mean computing resources (CPU, RAM, etc.) standard, contemporary computers do seem to have enough for current assistive technologies, one doesn't need to buy a higher end computer to run them. If you mean OS resources for supplying assistive technologies and accessibility APIs, mainstream OS's are decent but specifically for screen readers there's a lot of room for improvement.
> Which OSes and which GUI frameworks for those are the best or worst, how do they compare?
Hands down macOS/iOS are the leaders here with Cocoa/SwiftUI/UIKit etc (ultimately basically the same). The OS also has many hooks to allow third party frameworks to tie in to the accessibility.
Windows is second in my opinion. Microsoft does some good work here but it’s not as extensive in terms of integrations and pervasiveness due to how varied their ecosystem is now. They do however do excellent work on the gaming side with their accessibility controllers.
In terms of UI frameworks, Qt is decent but not great. Electron actually does well here because it can piggyback off the work done for web browsers. Stuff like ImGui ranks at the bottom because it doesn’t expose the widget tree to the OS in a meaningful way.
I can’t speak to web frameworks. In theory it shouldn’t matter as long as the components are good. Many Node frameworks pull in an a11y package to encourage better accessibility.
I switched from windows to macOS, which I’ve been using as my daily driver for the last year or so. Using the touchpad (or maybe the Magic Mouse) is basically a requirement to use “vanilla” macOS. Yes, you can install additional programs to help with window management, etc., but in my experience macOS is absolutely horrible when it comes to accessibility, from this standpoint. Maybe it’s better for colors, TTS, etc.?
I’m not sure what walls you might have been hitting, but macOS is completely usable with speech direction. I had to quite recently add better accessibility support to an app I worked on, and I was basically navigating the entire system with voice control and keyboard hotkeys.
Voice control in particular is really handy with the number and grid overlays for providing commands.
I’ll check it out. But this seems to approach accessibility as a feature to be turned on or off. Most of what it enables, based on Apple docs, is not just enabled in Windows and many Linux window managers I’ve used, but it’s something that developers actively utilize.
That's not where macOS came from. For Windows and Linux, "in the beginning was the command line" but not for Macs.
There's plenty one can do in macOS and its native applications with a keyboard by default; those that need more can enable "Use keyboard navigation to move focus between controls." Those that need even more enable Full Keyboard Access. These settings aren't on by default because Apple has decided they'd just get in the way and/or confuse people who use the keyboard but rely on it less.
In Safari specifically, by default pressing Tab doesn't focus links as it does in every other browser, because most people use a cursor to activate links, not the keyboard. There also tend to be a lot more links than what Tab does focus: form inputs.
Macs try to have just enough accessibility features enabled by default that anyone who needs more can get to the setting to turn it on. Something I just learned Macs have that other OS/hardware doesn't is audible feedback for the blind to log in when a Mac is turned on while full disk encryption is enabled.
I'm not claiming Apple gets everything right or that their approach is the best, I'm just trying to describe the basics of what's there and the outlook driving the choices.
I want touchscreen support on Windows. But guess what? Multitouch worked in Windows 7. If Windows still supported theming basic controls then Microsoft could enable touch screen support in most applications by setting a theme, similar to how they enhance contrast if you enable that feature.
I understand that bigger stuff and better graphics involve more RAM, and the switch to 64-bit doubled pointer sizes (which is why you can't meaningfully run Windows 7 x64 on 1 GB of RAM like you can the 32-bit version), but with 4 GB of system RAM you should be able to fit everything in and then some.
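As a quick illustration of the pointer-size point (a Python sketch; the toy Node structure is just an example, not anything from Windows):

    # Every pointer-sized field doubles from 4 to 8 bytes on a 64-bit build.
    import ctypes

    print("native pointer size:", ctypes.sizeof(ctypes.c_void_p), "bytes")

    class Node(ctypes.Structure):
        # A toy linked-list node: one pointer plus one 32-bit payload.
        _fields_ = [("next", ctypes.c_void_p), ("value", ctypes.c_int32)]

    print("Node size:", ctypes.sizeof(Node), "bytes")
    # 8 bytes on a 32-bit build (4 + 4), 16 on a 64-bit one (8 + 4 + padding)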
You actually can, as various Linux distributions demonstrate. The algorithms and APIs aren't as well developed, but better window control/accessibility APIs don't take up more than a megabyte of RAM.
People do ask for many Microsoft features, such as the appification of the interface and the Microsoft store. Just because you didn't ask for it, doesn't mean it's not necessary. However, Microsoft has known for years how to build and implement those requests in a much more compact environment.
My take is still the same old cynical one: as resources become cheaper, developers become lazier. I don't want to go back to the days of racing the beam with carefully planned instructions, but the moment Electron gained any popularity the ecosystem went too far. "Yes, but our customers want features more than a small footprint" is the common excuse I hear, but that's ignoring all the people calling various support channels or just being miserable with their terribly slow machine.
> as resources become cheaper, developers become lazier.
At most places I've worked it's a struggle to get time allocated towards necessary refactoring that'll ensure new features can be delivered in a timely fashion.
I'd love to spend time making the product more efficient but unless I can demonstrate immediate and tangible business value in doing so, it's never going to be approved over working on new features.
>No one asked for Windows on touchscreen anything. Microsoft decided that themselves and ruined the UX for the remaining 99% of the users that still use a mouse and a keyboard.
I have several devices, including a couple of Linux PCs, an M1 MacBook Air, and a Microsoft Surface Go. If Windows 11 didn't support touchscreens, I would have gone with an iPad. However, Windows 11 is the _best_ touchscreen OS to date.
Unlike iOS or iPadOS, Windows 11 runs desktop apps and combines the convenience of touchscreen scrolling/interaction with the desktop experience. Windows 11 does this very, very well.
I'm curious if you've used Chrome OS recently, there's a lot of good work there too. Touch is there if you need it with the keyboard open, then goes into tablet mode if the laptop is convertible or detachable. The touch/tablet UI has lost many rough edges in the last 2-3 years, and it hasn't affected the mouse/keyboard mode most people use Chromebooks for.
I don't use Windows anymore but I remember thinking "this is exactly what I've always wanted from a convertible/touch-support-in-desktop OS"...
I think I first saw it running on a Geforce with 64MB of RAM. Even then it was smooth as butter.
Now that I think about it, Mac OS X was doing GPU compositing back in 2000/2001, and those machines usually only had about 16MB of VRAM. I remember it running fairly well on a 2005 Mac mini G4 with 32MB of VRAM.
The first versions of Mac OS X only supported software rendering. GPU compositing didn't show up until 2002, in Mac OS X 10.2. It was branded as "Quartz Extreme".
I did not know that! There was about a 6-7 year gap between 1997-2004 where I didn't really do much with Macs. But your timeline seems spot on; it was 10.3 when they introduced Exposé into the system. A great demonstration of the GPU functionality in action.
Actually, IIRC the only requirement for DWM to work was a GPU that supports shaders, because that's what makes the window border translucency/blur effect possible.
A compatible driver, actually. There were at least DWM 1.0 (Vista) and DWM 1.2 (Win7), but Intel never provided a compatible driver for the... 915? series, so you couldn't enable composition on them, despite the hardware being capable enough.
Prodigy had vector based graphics in a terminal back in the 1980’s. Granted, that targeted EGA and 2400 baud modems, but I wonder how well it would work on modern hardware if you just gave it a 4k, 24bit frame buffer, and fixed up the inevitable integer overflows.
Actually, I've run Citrix (ancestor of Remote Desktop) on a 14.4k modem. Once all the bitmaps are downloaded and cached (those app launch 1/2 screen splash pages were murder), it ran pretty well. The meta graphic operations (lines, circles, fills, etc.), fonts, etc. worked fine. Any large pixmap operations were crushing, but most productivity apps didn't use those as much as you'd think.
You didn't ask. It is, as you say, your personal opinion.
From my POV, current Web is fine and the fact that browsers are powerful liberated us from writing specialized desktop apps for various OSes. I am much happier writing a Web UI than hacking together Win32 or Qt-based apps. Or, God forbid, AVKON Symbian OS UI. That was its own circle of hell.
> liberated us from writing specialized desktop apps for various OSes
I use macOS and I very much dislike anything built with cross-platform GUI toolkits, and especially the web stack. And it's always painfully obvious when something is not native. It doesn't behave like the rest of the system. It's not mac-like. It draws its own buttons from scratch and does its own event handling on them instead of using NSButton. I don't want that kind of "liberation". I want proper, native, consistent apps. Most other people probably do too, they just don't realize that or can't put it into words.
The only counter-example out there known to me is IntelliJ-based IDEs. They're built with Swing, but they do somehow feel native enough.
Also, developer experience is not something users care about. And I'm saying that as a developer myself. Do use fancy tools to make your job easier, sure, but avoid those of them that stay inside your product when you ship it.
I don’t like the direction GUIs have gone either, and think the JavaScript-ization of everything has been pretty dumb. But it seems that bloat is doing well in the market.
Users might not care about developer experience, but everything is a trade off: developer time is a cost, the cost of producing software is an input into how much it needs to cost. Users seem to want features delivered quickly, without much regard to implementation quality.
Users just don't have much say in the matter. Case in point: Discord and Slack are atrocious UX-wise. You're still forced to use them because, as with any walled-garden communication service, you aren't the one making this choice.
Hold up. It's been ~14 years since Apple shipped machines with 2GB of memory as their base model.
macOS (and iOS) have incredibly good screen reader support, as well as all of the things you're complaining about in your original comment at the top of this thread. Clearly those things are absolutely gobbling memory, and yet you don't seem to connect the dots that they're directly contributing to high memory requirements of macOS?
I mean, 8GB on stock machines today is barely manageable. You can't buy a Mac with less than 8GB today; you can't even buy a phone with 2GB or less. I'm not sure you're in a position to rail against high-memory bloat in computing today.
p.s. I say this as someone who uses macOS as their daily driver and has for a very long time
> I'm not sure you're in a position to rail against high-memory bloat in computing today.
Nobody is a hypocrite for buying X gigabytes of ram but also wanting the naked operating system to use a much smaller amount, or wanting single programs to use a much smaller amount.
> macOS (and iOS) have incredibly good screen reader support, as well as all of the things you're complaining about in your original comment at the top of this thread. Clearly those things are absolutely gobbling memory, and yet you don't seem to connect the dots that they're directly contributing to high memory requirements of macOS?
What makes a screen reader gobble memory?
And it definitely shouldn't gobble memory when it's not running.
Mainly the TTS engine being ready for input, stuff like that. Of course you could go to Linux, where you have to enable assistive technologies support before the whole desktop understands that it should work with screen readers. I'm guessing that's where accessibility does take up RAM and resources.
Screen reader support by itself doesn't gobble memory. Android has had it for ages, and still runs on devices with less than 1 GB RAM (Android Wear watches).
Running several instances of Chromium though... You'll probably run one anyway at all times as your actual web browser, but additional ones in the form of "oh so easy to build" Electron apps don't help. In Apple's eyes, though, you should absolutely ignore other browsers and use Safari exclusively. It might not be as much of a memory hog as Chrome — I haven't researched this, these are simply my guesses.
I also heard that M1 Macs are better at memory management compared to Intel. Again, I don't have any concrete evidence to back this up, but knowing Apple, it's believable.
It liberated you as a developer. As a developer, I can understand. As a user, I hate you. You never provide me, as a user, with a native experience via a web UI. You use custom controls, which break the conventions of native controls a little bit here and there. You cannot use the full power of the OS (the YouTube or Spotify player doesn't pause itself when the workstation is locked; my native player of choice does). You eat my resources. You cannot make your application consistent with an application from another vendor, so I need to remember different patterns for different apps. Your typical browser app doesn't have ANY features for power users, like shortcuts for all commands and useful keyboard controls (not to mention full customization of those controls, toolbars, etc). Damn you and your laziness!
But I understand that most of my complaints are the complaints of a power user with 25+ years of experience and muscle memory, and I'm not the target audience for almost any new app. You win :-(
Everything is a trade-off. If, as a developer, you have to spend ungodly hours on learning multiple UIs, you will have less time left for the actual business logic of your app. Which, from the user's side, means one of the following three:
a) nice looking, but less capable apps,
b) more expensive apps, or apps that have to be paid for even if they could be free in an alternate universe,
c) limited availability - app X only exists for Windows and not Mac, because either a Mac programmer isn't available or would be too expensive.
Developing for multiple UIs at once is both prone to errors and more expensive, you wind up paying for extra developers, extra testers/QA, extra hardware and possibly extra IDEs and various fees. Such extra cost may be negligible for Google, but is absolutely a factor for small software houses outside the richest countries, much more so for "one person shows" and various underfunded OSS projects.
I remember the hell that was Nokia Series 60 and 90 programming. Nokia churned out a deluge of devices that theoretically shared the same OS, but they had so many device-specific quirks and oddities on the UI level that you spent most of the time fighting with (bad) emulators of devices you could not afford to buy. This is the other extreme and I am happy that it seems to be gone forever.
If your application can be useful on different OSes (and now there are only 3 OSes in existence, as porting a desktop application to mobile requires a completely different UI and UX no matter what technology you use!), break it into business logic and UI and find a partner or hire a developer who loves to develop native UIs for the other OSes. The MVC pattern is old and well known (though not fashionable now, I understand).
OSS projects are a completely different story, of course; no questions to OSS developers.
I prefer to pay $200 for a native application rather than $100 for an Electron one.
Oh, who am I trying to fool? Of course, it will be an Electron app with a $9.95/month subscription now :-(
"break it into business logic and UI and find partner or hire developer who love to develop native UIs for other OS"
As I said in my previous comment, this is quite expensive, and people inside Silicon Valley rarely understand how cash-strapped the software sector in the rest of the world is. In Czech, we have a saying: "a person who is fed won't believe a hungry one", and SV veterans who are used to reams of VC cash supporting even loss-making businesses like Uber have no idea that the excess spending needed to hire another developer for several months somewhere in Warsaw or Bucharest may kill a fledgling or small company.
An optional installable component, until you have a blind person doing tech support and they have to walk a tech-illiterate person through installing the accessibility stack, lol. Or until you suddenly go blind from a condition or accident and have to mouse your way through the interface, blind, to install that component. Ugh, ableism.
As someone from back in those days, I'll tell you to go load up that software that fits in some small amount of memory. You'll find most of it is crash-filled hot garbage missing the features you need. And the moment you wanted to add new features, you'd start importing libraries, bloating the size of the application.
In general, I would say far more stable and with far more features.
But this of course is in the metrics of how you measure. Windows 3.1 for example was a huge crashing piece of crap that was locking up all the damned time. MacOS at the time wasn't that much better. Now I can leave windows up for a month at a time between security reboots. Specialized Windows and Linux machines in server environments on a reduced patching schedule will stay up far longer, but generally security updates are what limits the uptime.
I remember running Windows applications and receiving buffer overflow errors back then. If you got a buffer overflow message today, you'd think that either your hardware is going bad or someone wrote a terrible security flaw into your application. And back then there were security flaws everywhere. 'Smashing the Stack for Fun and Profit' wasn't written until '96, well after consumers had started getting on the internet en masse. And if you were using applications like Word or Excel you could expect to measure 'crashes' per week rather than the crashes per month, many of which are completely recoverable in applications like Office.
I've been on Win11 for 1.5 years or so (Win11 Insider Beta channel) and before that was on the Win10 Beta/Dev channels. From what I remember so far, I was warned multiple times and asked to pick a time, and only after the user (me) showed no cooperation was the system forcibly rebooted, which for a consumer-grade edition (I have the Pro version) is fine, from my PoV. I don't want [my] system and systems around me to be part of botnets like Linux boxes of all sorts.
> For many applications Windows 10 saves state and comes back right where you started on a security update reboot.
This needs application support; by this broad definition all operating systems "save state and come back right where you started on a security update reboot".
Resolutions and HDR are one area where I think the extra RAM load and increasing application sizes make complete sense. However, my monitors run at 1080p, don't do HDR, and my video files are encoded at a standard colour depth. Despite all this, standalone RAM usage has increased over the years.
Accessibility has actually gone down with the switch to web applications. Microsoft had an excellent accessibility framework with subpar but usable tooling built in, and excellent commercial applications to make use of the existing API, all the way back in Windows XP. Backwards compatibility hacks such as loading old memory manager behaviour and allocating extra buffer space for known buggy applications may take more RAM but don't increase any requirements.
I agree that requirements have grown, but not by the amount reflected in standby CPU and memory use. Don't forget that we've also gained near-universal SSD availability, negating the need for RAM caches in many circumstances. And that's just ignoring the advance in CPU and GPU performance since the Windows XP days, when DOS was finally killed off and the amount of necessary custom-tailored assembly drastically dropped.
When I boot a Windows XP machine, the only thing I can say I'm really missing as a user is application support. Alright, the Windows XP kernel was incredibly insecure, so let's upgrade to Windows 7 where the painful Vista driver days are behind us and the kernel has been reshaped to put a huge amount of vulnerable code in userspace. What am I missing now? Touchscreen and pen support works, 4k resolutions and higher are supported perfectly fine, almost all modern games still run.
The Steam hardware survey says it all. The largest target audience using their computer components the most runs one or two 1080p monitors, has 6 CPU cores and about 8GB of RAM. Your average consumer doesn't need or use all of that. HiDPI and HDR are a niche and designing your OS around a niche is stupid.
True, but with those access times you can wait a lot longer for content to be loaded into RAM. Hard drives are the reason that, for many years, games needed to duplicate their assets, for example, because seek times slowed down loading, and putting the same content in the file twice, but at the right place, would speed up the loading process significantly. Games today still have special HDD code because of the difference in performance class.
SSDs won't replace RAM but many RAM caches aren't performance critical; sometimes you need your code to be reasonably fast on a laptop with a 5400 rpm hard drive and then you have very little choice of data structures. With the random access patterns SSDs allow this complication quickly disappears. You won't find many Android apps that will cache 8MB block reads to compensate for a spinning hard drive, for example.
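A rough model of why that mattered so much, as a Python sketch; the seek and throughput figures are ballpark assumptions, not benchmarks:

    # Why scattered small reads murdered HDD load times: seek latency dominates.
    def load_time_s(num_assets, asset_kb, seek_ms, mb_per_s):
        """Total time to read num_assets files of asset_kb each."""
        return num_assets * seek_ms / 1000 + num_assets * asset_kb / 1024 / mb_per_s

    assets, size_kb = 5000, 64  # ~312 MB of small assets
    print(f"HDD, scattered reads : {load_time_s(assets, size_kb, 10, 120):.1f} s")
    print(f"HDD, contiguous reads: {load_time_s(assets, size_kb, 0.1, 120):.1f} s")
    print(f"NVMe SSD             : {load_time_s(assets, size_kb, 0.05, 2000):.1f} s")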
I ran e16 and then e17 as my main desktop back in the day for a good while. I'm sorry but what we had back then was nowhere even near what I'm talking about.
What do we have today that we didn't have back then in terms of bare desktop support?
I mean, we have larger resolution support and scaling for HiDPI, better/faster indexing, better touchpad support. Can you name anything else? Localization hasn't progressed that much; I remember already being able to select some barely spoken dialects on Linux 20 years ago.
NeXTSTEP 3.1 ran fine at 1152x832 4 shade mono with 20MB of RAM. 32MB if you were running color.
It was also rendering Display PostScript on a 25Mhz '040. One of the first machines in its day that allowed you to drag full windows, rather than frames on the desktop. High tech in action!
You could also do that in '92-ish on RISC OS 3 running on a 1MB Acorn Archimedes with 12MHz ARM2 processor, with high quality font antialiasing. Those were the days!
> Hasn't it, though? HDR, fluid animations, monstrous resolutions, 3D everything, accessibility, fancy APIs for easier development allowing for more features, support for large amounts of devices, backwards compatibility,
So, the features Windows 7 had? I remember running a 3D desktop with a compositor and fancy effects on a 1GB RAM laptop on Linux...
Please don't miss the malware within the OS itself: license services for software such as Microsoft Office and Adobe, and other applications without enough resource bounds.
It is still possible to have a snappy computer experience. Go Linux, use a very configurable distro (Arch, Gentoo, NixOS), choose a lightweight DE and app ecosystem and it will get you there for the most part.
Browsers are still going to be the sticking point, but with aggressive ad blockers/NoScript and hardware that's not terribly old (NVMe storage is priority 1), you should be set.
But of course, snappiness isn't free and you have to spend some time doing first time set-ups and maintenance.
I’ve got 16 GB of RAM and the browser is using most of it. I can literally see the swap space emptying when I have to (as in "I'm forced to") sacrifice my browsing session (xkill the browser) due to constant swapping out to disk.
And I’m using a PCIe gen 3 NVMe disk, and I've already lowered swappiness.
At this point, my primary use case for ad blocking isn't the ad blocking itself; it is 1. the security of blocking ads, one of the worst vectors for attacks in a while, and 2. the greatly reduced system resources my browser uses. The ad blocking itself is a further bonus.
I'd suggest again trying NoScript/ad blocking; disable hardware accel if you have it enabled, or enable it if disabled.
If even then you have no success, I'd suggest you try something like EndeavourOS. Browsers have issues, but that is not normal. You're not using Debian stable on the desktop, right?