KDE and XFCE are my favorites. I could probably daily drive XFCE but with higher resolution (4K) monitors, KDE tends to look a lot better, at least in my experience. I loathe Gnome and don't really understand why it has become the default DE on so many platforms.
Desktop linux is not being held back by system usage. If anything, we need to stop caring about that for a while and focus on quality of life / ergonomics. A unified clipboard (kb shortcuts as well as shared buffer), proper 4k rendering/scaling at a nuclear level, etc. This is my biggest gripe with Linux.
I would ditch macOS entirely if I could do the following: 1. complete keyboard remapping such that macos keybindings work everywhere (there are hacks for this, but they are hacks), 2. proper 4k rendering (no blurry stuff) with consistent titlebars and ui elements across all applications, 3. the ability to copy and paste text into or out of a terminal without fuss.
I don't care about X11 vs Wayland.
All this considered - and I can't believe I am about to say this - Windows 11 is not nearly as bad as I thought it would be and I kinda like it. Might need to go see a doctor; I'm starting to say funny things.
It sounds like you might care though, albeit not directly. That blurriness is often the result of attempting to use fractional scaling (i.e. not an integer multiple like 200%, 300%, etc). This is common on 4k displays as many are too small for 100% scaling to be readable but too big for 200%. You can do fractional scaling on X11 but Wayland typically does it better, and without the blurriness.
> This is common on 4k displays as many are too small for 100% scaling to be readable but too big for 200%.
This is why I went for 32" 2560x1440 when I got a larger monitor. Very similar pixel pitch to the 24" 1920x1080 units that I had before (one of which stands beside it in portrait, so 1080x1920). I didn't need more smoothness, just wanted more on screen without needing to make things smaller.
This is the entire reason I use Wayland on my personal machine (Ubuntu 22 with Gnome). The X11 fractional scaling is fine, but a bit blurry. Wayland makes it look crisp, like it should.
I'm in PopOS and have 150% scale on one monitor, with 100% on the other. Not seeing anything blurry, and pretty sure I'm on X11. Is that an older problem?
And I think the point was more that many of us don't care how it is fixed. If that takes a new application, so be it. If it can be done on the older one, also so be it.
X11 has been doing fine in the past few years. I have both my home and office computers running it with dual 4k displays and everything appears to be swell.
I use Cinnamon as desktop because it happens to be the easiest to configure for 150% scaling. I think I've tried xfce and some others but ended up with mixed results.
Can the mouse scroll speed be configured in KDE, nowadays?
I'm flabbergasted that this question even needs to be asked. The Linux desktop is so advanced and at the same time so lacking in stupidly simple matters such as this one. But here we are; look at how complex the answer marked as the solution looks here: https://askubuntu.com/questions/285689/increase-mouse-wheel-...
(hint: the reasonable expectation, IMHO, would be to "open System Settings, go to Mouse Settings, and change a slider")
I've been in love with Linux Mint's Cinnamon and its very good balance between functionality and out-of-the-box experience. But no, the system settings don't allow changing the scroll speed. I won't complain, but it does confuse me...
> Can the mouse scroll speed be configured in KDE, nowadays?
Unfortunately KDE has gone full stupid in this regard. From the numerous bug reports, I think they decided that you can't please everyone, so they should please nobody. Scroll speed is not deterministic to humans, who are not calculators. The intuitive thing is to scroll a consistent amount per wheel tick. They used to do this, but now it's a percentage of the content, so sometimes it scrolls 14 lines and sometimes it scrolls 8.
I guess the behavior is different in Wayland because "x11 is dead", therefore only breaking changes make it into the x11 version. Some day they'll make the x11 version suck as bad as Wayland and then people will make the right choice. /s I'd be a bit less salty if I wasn't being asked to choose between two different sets of broken. I need LibreOffice to work, so I'm using x11.
... yeah, what? If the content is really, really long, is it possible to scroll the mouse wheel one click and have it skip part of the content because "1%" (or whatever increment it is) is longer than what fits on the screen?
Clicking a toolbar button that shows a popup of some type (e.g. a color chooser) doesn't display the popup. I'm not asking for exotic behavior. No, it's not NVIDIA's fault, because I have no such hardware.
> Can the mouse scroll speed be configured in KDE, nowadays?
> (hint: the reasonable expectation, IMHO, would be to "open System Settings, go to Mouse Settings, and change a slider")
Yes. This is more or less what KDE does everywhere the upstream software stack supports it, and probably also what it already did the last time you tried it. This comes down to your distro's display server configuration.
On Xorg, mouse scroll speed is up to your X11 input driver, not your desktop environment. If the X11 input driver you're using for your mouse supports setting the scroll speed, that setting shows up in System Settings in your Plasma session. Libinput doesn't support this feature, so if you're on Xorg and you want it, don't use libinput as your mouse driver.
Under Wayland (which also uses libinput), the compositor mediates all input and so the KDE folks have worked around the issue and implemented the feature themselves.
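For what it's worth, the usual workaround on Xorg with libinput is imwheel, which multiplies wheel events before they reach the application. A minimal ~/.imwheelrc sketch (the multiplier of 3 is just an example; tune to taste, and imwheel has to be running in your session for it to apply):

```
# ~/.imwheelrc - turn each wheel tick into 3 scroll events
# ".*" matches all window classes; the trailing number is the multiplier
".*"
None, Up,   Button4, 3
None, Down, Button5, 3
```

Then start `imwheel` from your session autostart. It's a hack layered on top of X11 button events, but it gives you the missing slider, more or less.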
At least in XFCE, that "reasonable expectation" has been fulfilled for as long as I can remember.
In KDE, I have no idea, because the last KDE that I have used was 3.5, which was the best desktop environment that I have ever used, including in comparison with any Windows or macOS version (while KDE 4.0 was the worst).
I tried it again last year some time and it was neither as configurable nor as stable as 3.5. Granted, maybe it's just rose-coloured glasses, but Trinity DE still seems to provide a better experience for me to this day.
Battery life and a more pleasant trackpad experience.
I know it's hard to tune a good trackpad profile with such varying hardware, but it sure would be nice to get MacBook-like smoothness on a Linux DE.
And battery life ... Holy hell, Mac is so great and Windows is shitting the bed with their smart suspend.
I'm okay with fedora on my laptop, and I must say I've gotten used to Gnome. There is a tweak tool for gnome that allows for adjusting some things I think should be tweakable and discoverable by default.
After getting a ThinkPad, I went out on a quest to get trackpad kinetic scrolling working and I came to the conclusion that it will always suck on Linux. Some apps feature it (Firefox, certain Gtk apps). But the problem is that libinput does not implement it (for good reasons, AFAIK) which means it's left up to the GUI libraries to implement. Each. And. Every. Time. If you have a Qt app running next to a Gtk app, good luck. And this is, yet again, why Apple wins in UX. Their kinetic scrolling is implemented across the entire desktop. As it should be. You can't get a consistent feel when you let individual apps implement it with wildly different parameters.
> And battery life ... Holy hell, Mac is so great and Windows is shitting the bed with their smart suspend.
On Linux you can get close to what the Mac does. I believe the MacBook has a hybrid hibernate/suspend. Suspend, for me, only lasts 2-4 hours and then my battery dies. I think there is another suspend setting that improves this, but there is also a hybrid suspend that will hibernate after X hours of suspend. My laptop can be closed for a week or more and still have battery.
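On systemd distros this hybrid behavior is built in as suspend-then-hibernate; a sketch of the config (the 2h delay is just an example value):

```
# /etc/systemd/sleep.conf - after 2 hours of plain suspend, wake
# briefly and write the image to disk (hibernate)
[Sleep]
HibernateDelaySec=2h
```

You then invoke it with `systemctl suspend-then-hibernate`, or map the lid switch to it via `HandleLidSwitch=suspend-then-hibernate` in logind.conf. It assumes you have a working swap/hibernate setup to begin with, which is its own adventure.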
Yes, the issue with Linux is not Linux. It's the hardware. Mainly the processor. I'm amazed that Qualcomm, Broadcom or someone else cannot release a decent ARM based processor that can compete with the M1 in terms of performance and power consumption. I really hope there will come something in 2023.
True. Apple cannot be touched when it comes to laptops. My M2 Air is a marvelous machine. I do most of my dev via remote/ssh now and so the battery life on this machine is almost measured in days now. In Geekbench 5 it outperforms my new core i7-12700k on single core performance.
Look, I'm not an evangelist, but it solves so many annoying little problems with X11 that I never want to go back. You don't realize how much of a hacked-together, obsolete kludge that X11 is until you use an alternative.
The problem for me is that I do not entirely know what Wayland is solving for me. I am looking at it from the pragmatic viewpoint. I will use whatever technology is superior, but for the last few years Wayland has not been a 100% drop-in replacement and certain applications don't work. Heck, for the longest time Wayland would straight up not run on modern nVidia hardware.
I have tried it, in fact I just rolled out Fedora 37 w/ KDE Plasma on Wayland to my new i7 box ... but I have only used it for a few hours as it is more of a toy project than a daily dev machine.
Well, here's one unexpected benefit I discovered - vastly lower CPU use when playing videos. I don't know enough about display architecture to understand why, but on one of my more underpowered machines it makes the difference between being able to use streaming services, and not being able to.
Perhaps you can enable hardware video decoding in Firefox on X11 by visiting about:config and setting media.ffmpeg.vaapi.enabled = true (unsure about media.ffmpeg.dmabuf-textures.enabled = true).
> complete keyboard remapping such that macos keybindings work everywhere
I also use a customized layout (mostly international English with shortcuts for German umlauts and Y and Z remapped to each other). It definitely works in most modern Linux desktops to use a "custom keyboard layout" as described here. There are many outdated guides on the topic. It's a hack and not super simple to execute (you only need to do it once), but it is persistent and system-wide in all programs at least:
What he probably meant is that Command+C is copy, etc.
You can’t easily do that because every Linux program expects different things. The hack to make it work is to switch keyboard configs when you switch the application, I think. It’s far from ideal.
I'm using GNOME 43 right now on Wayland, and it is quite comfy. But this is mainly due to me heavily using extensions and tweaks to better the experience.
I agree that vanilla GNOME just isn't good enough. But especially for Wayland, GNOME is far superior to KDE in terms of stability and glitch-freeness.
I think the irony is that Linux could easily achieve this by simply copying the OS X use of Meta (Cmd/"Windows") key for the copy/paste instead of DEs stubbornly trying to ape the Windowsy Ctrl-key combination paradigm, which conflicts with the use of Ctrl by terminal applications. The time is long past when keyboards would ship without a Meta key. Another user mentioned Ctrl-Shift, but pressing three keys requires too much dexterity for a simple operation like copy (and you might accidentally Ctrl-C your script!).
I thought Meta is usually Alt? Or am I confusing it with Mod?
AFAIK Haiku has things like copy on Alt. So you'd have to press Alt+C to copy.
Personally, I would prefer to keep the "Windows"/Super key free for OS-wide shortcuts, e.g. tap=search, hold=drag window with mouse, Super+enter=terminal, Super+h=minimize (as in GNOME), Super+w = overview (as in KDE), etc. Using Alt for that and Super for commands would also work, but is somewhat counterintuitive to me as Super has the logo of an OS on it. On the other hand it is the same key order as used by macOS.
That's actually a really good idea. Although given everything in linux works with ctrl-c except terminal emulators, that's probably where the change needs to happen, ie using meta there.
> complete keyboard remapping such that macos keybindings work everywhere
I have some weird preferences in keybindings (alt-space=backspace etc), and Xkb could very easily handle everything I threw at it. If you are using X11, you should be able to get your preferred keybindings:
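To give a flavor of it, an xkb symbols snippet for something in that spirit (the variant name is invented, and I'm using RAlt as the third-level chooser rather than plain Alt, since that's what xkb supports out of the box):

```
// Sketch: RAlt+Space produces BackSpace via the third symbol level
partial alphanumeric_keys
xkb_symbols "altspace" {
    include "level3(ralt_switch)"
    key <SPCE> { [ space, space, BackSpace ] };
};
```

You splice this into your layout with setxkbmap/xkbcomp; the exact incantation varies by distro, but once installed it applies to every X11 client.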
For such a frequently utilized key combo, three keys is too many. On macOS cmd-c and cmd-v work everywhere, too. Windows it is ctrl-c and ctrl-v. Linux it is... dependent on what application you are using and wildly frustrating. Sure, you can remap keys, but then you need to start to really be concerned with config management, backing that up, etc. Getting 'up and running' from 0-hero on macOS takes a few minutes for me, but with Linux it's a couple of days of tweaking at this point.
> Linux it is... dependent on what application you are using and wildly frustrating
What other application uses something else than CTRL+C for copying on Linux? Been using various Linux flavors for decades and the only application that requires a different shortcut than CTRL+C is terminals, as they already use CTRL+C for something else.
Otherwise I can't think of a single application using something else than CTRL+C.
I am spoiled by macOS, where Cmd is the primary modifier key. So Cmd+C and Cmd+V are for copy/paste, and Ctrl+C still continues to work in a terminal. I understand the need for Ctrl+Shift+C in a Linux term, where Ctrl+C needs to remain for signaling processes. I just wish I could globally remap this. Some hacks exist for this (https://github.com/rbreaves/kinto) but it's a leaky abstraction and not foolproof.
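Tools like keyd work one level lower (evdev), so the remap applies to every app, X11 or Wayland. A minimal sketch of the idea (this is not kinto's actual config, and it sacrifices the Super key):

```
# /etc/keyd/default.conf - make the physical Super/Cmd key act as
# Ctrl, so Cmd+C arrives in applications as Ctrl+C
[ids]
*

[main]
leftmeta = layer(control)
```

The catch is exactly the one above: the terminal now sees Ctrl+C again, so you still need a per-application exception for terminals, which is where the abstraction starts leaking.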
Besides CTRL+SHIFT+C/CTRL+SHIFT+V to copy/paste in terminals, there are some other alternative mappings for common functions that I like to use sometimes and which work pretty widely:
Does no one use the X11 select/middle-button paste? Hell, I like it so much I get irritated when I go back to Windows and have to use (fake exasperated tone) four extra keystrokes to copy and paste. But jokes aside, it is annoying as hell to use Windows copy/paste after the sublime X11 experience, mainly because it takes two or three aborted paste attempts before I get it right.
Why did it not paste?? I know I selected it; go back and select... still nothing. Ah yes, I am on Windows. Go back, Ctrl+C, exasperated sigh, back again, Ctrl+V, exasperated sigh.
X11 has its problems, but primary-selection/middle-button paste is one thing it got right. Some others are point-to-focus, don't-raise-on-click, tiling window managers and the compose key.
One class of programs: terminal emulators. That's it (well, and Emacs, but that's true on Mac too...). And a lot of them let you change it to C-c/C-v if you want. You'd need to change the interrupt behavior too, but you can.
GNOME 3 isn't the current version anymore. GNOME 3.40 got released as GNOME 40 instead, which was succeeded by 41, 42 and now 43.
Personally I understand the criticism of GNOME 3, but I liked it a lot more than the GNOME 40 series. I've switched to KDE Plasma around this summer, I think from GNOME 42.
If you want completely monolithic & uniform experience built for your specific needs that no one else has on Linux but you, I don't think you'll be happy, ever.
> consistent titlebars and ui elements across all applications
You can get fairly close to this if you use gnome or kde specific distros.
But you're generally asking: please don't be open source. Please don't have a bazaar of ideas. Please build me one big cathedral. Your ask here is antithetical to the purpose of whom you are asking. On most general purpose distributions, users probably ought to end up having multiple different UI elements.
> complete keyboard remapping such that macos keybindings work everywhere
I actually like this idea a lot, because it suggests a certain system-wide malleability layer that, at the moment, doesn't exist. Anywhere, on any system, at all. How would you go about making your Mac have consistent Windows keybindings, for example? I'd be interested to see how people thought we might tackle this, generally.
Folks could make a custom Linux distro that pre-configures each app to be Mac like. I think that's the best chance. But jeeze it seems like an unholy crusade to support a very specific niche, a niche not known for participating & giving back & relishing what we are & do. Awfully big bridge to build & it will forever have some rickety-ness to it.
You could use something like https://gitlab.com/interception/linux/tools to read certain Mac key-combos or what not and rewrite them. But what would you rewrite to? I don't have a good sheet of what it is you'd be asking for or wanting to just work.
My spitball idea for how we'd really fix this: I'd like each app to register with a dbus service all of the "actions"/commands it can do, and allow rebinding & activation of the actions over dbus. Maybe even actions actually are just dbus methods, but annotated somehow, to describe their hotkeys or to give them human friendly names? Anyhow, whatever the impl, there could be a central "hotkey manager" that could see all keyboard bindings & let us top-down manage them. There'd need to be some way for "Save" in Kate to be combined/grouped together with "Save" in Firefox, somehow, for this to be helpful. Managing this namespace of actions would be a terror of a problem, utterly absurd, in my view, which implies strongly the difficulty of the ask here I'm trying to respond to, but I actually think it'd be a pretty noble & cool effort. In part because of what it relates to:
System command apps. Tools like dmenu/alfred/albert/quicksilver are meant as general top down interfaces, are often scriptable/extensible/deeply configurable to allow fast access & control of a variety of actions. By recognizing keybindings as what they are: actions/commands, and suggesting that the "actions"/commands of an app get bubbled up to the system layer & get managed there, we just make these top-commanders more powerful. There's also an extreme parallel here to voice-agent systems, like Chrome Assistant, Alexa, Siri, where apps present actions & the system is in charge of taking user input and translating it into actuation; they too are directory systems of actions, rather than having each app in isolation.
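To make the shape of the idea concrete, here's a toy sketch in Python (all the names and the grouping scheme are invented; a real version would expose this registry over dbus rather than in-process):

```python
# Toy central "hotkey manager": apps register named actions, the
# manager owns the binding table, and same-named actions across apps
# can be rebound top-down in one operation. Purely illustrative.
from collections import defaultdict

class HotkeyManager:
    def __init__(self):
        self.actions = {}                   # (app, action) -> callback
        self.bindings = defaultdict(list)   # hotkey -> [(app, action)]

    def register(self, app, action, callback, default_hotkey):
        """An app announces an action it can perform, with its default key."""
        self.actions[(app, action)] = callback
        self.bindings[default_hotkey].append((app, action))

    def rebind(self, action, hotkey):
        """Rebind every app's variant of `action` to a single hotkey."""
        for key in self.bindings:
            self.bindings[key] = [(a, act) for a, act in self.bindings[key]
                                  if act != action]
        self.bindings[hotkey] = [(a, act) for (a, act) in self.actions
                                 if act == action]

    def press(self, focused_app, hotkey):
        """Dispatch a hotkey to the focused app's matching action."""
        for app, action in self.bindings.get(hotkey, []):
            if app == focused_app:
                return self.actions[(app, action)]()

mgr = HotkeyManager()
mgr.register("kate", "save", lambda: "kate saved", "ctrl+s")
mgr.register("firefox", "save", lambda: "firefox saved", "ctrl+s")
mgr.rebind("save", "super+s")          # one top-down change covers both apps
print(mgr.press("kate", "super+s"))    # -> kate saved
print(mgr.press("kate", "ctrl+s"))     # -> None, old binding is gone
```

The hard part this glosses over is exactly the namespace terror above: deciding that Kate's "save" and Firefox's "save" are the same action is trivial in a ten-line dict and a nightmare across a real app ecosystem.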
> ability to copy and paste into or out of a terminal without fuss
Copy/paste just works for me? Not sure what the problem statement is here, and/or what terminals you've suffered under.
> Desktop linux is not being held back by system usage. If anything, we need to stop caring about that for a while and focus on quality of life / ergonomics.
100%. My main personal laptop is 4GB. I run sway which is low resource consumption, but in general resource consumption seems like a huge non-factor to me. In general, Linux isn't going to win by being more conservative. Pining about resource consumption is self-rewarding, self-gratifying: one feels zealous & virtuous, like you have the true cause amid a fallen world & are the path of the defender. But IMO it's mostly detracting & abusing the good & necessary & vital suffusion of creativity & possibility into the world. The scope of consumption is not that bad. And we have the important task of figuring out where to go still ahead of us: I'd rather be conservative once we have better ideas of what works, at any resource budget, & hone back down from there. Rather than forever dance around this maxima/minima we're on & tune for what we have.
I'm also unimpressed with this article in general. Showing the amount of memory mapped in seems incredibly uninteresting & indicative of nothing. The amount of data read has some correlation with start time, but loosely: if Gnome is reading 1GB sequentially (it's not, but for example) while KDE is doing 512-byte random reads (it's not) at half the total size, you'd probably still want to pick Gnome.
I’ll never understand the logic behind “high memory usage” as a metric for desktop environments. If you had the hardware, would you rather see it used, or see it dog slow trying to load UI components from disk? I can understand the idea of using memory used over time as a proxy for how featureful/bloated something is, but blind monitoring of memory usage seems pointless to me. At best it can be a capacity metric: “if you have only 2 GiB of RAM on your machine, you’re better off with XFCE vs KDE, but on a machine with 8 GiB of RAM, would you not rather see it being used?”
> I’ll never understand the logic behind “high memory usage” as a metric for desktop environments.. If you had the hardware, would you rather see it used or see it dog slow trying to load UI components from disk?
Linux already uses all the available RAM for disk cache. I would rather not have an app do that (except in special cases like databases, which know better than the underlying file system what needs to be cached).
If a desktop environment is using a lot of memory, this means the same memory is not available to my other apps. So yeah, even on a 32GB machine, I do care about what each app uses.
As I type this, on my 32GB GNOME system, 21GB is currently in use. Of that, gnome-shell uses 2.1GB plus another 700MB for gjs, and Zoom (which I haven't used for a couple of days; it just sits in my tray) uses 1.4GB. That's 12% for doing virtually nothing (there are window managers that easily take 5% of the resources of gnome-shell, and one of these days I'll be fed up enough to switch; and Zoom does literally nothing right now except hog memory), and that's just looking at the two top memory users.
Whilst some components inherently use more memory than other viable alternatives (gnome-shell's JS runtime), gnome-shell generally has a lot more components and thus features than the more barebones window managers - and this should always be pointed out. Of course, if you are certain you will never need the extra functionality, then don't use Gnome. But generally, I don't think it makes sense to compare plain window managers with desktop environments. I think an interesting comparison would also be looking at quartz, the macOS window manager/desktop environment.
> gnome-shell generally has a lot more components and thus features than the more barebones window managers
My experience with gnome-shell is that it is in fact a huge massive bloat for what it provides. What features does it provide that more barebones window managers don't provide?
Whatever you have allocated to those VMs is not available to the host at all. QEMU-KVM will allocate 2GiB of memory if the VM has 2GiB available, even if the Linux guest is only actively using 2% of that.
That's not correct. With a VirtIO ballooning device [0], you can create faux memory allocations in the VMs and reclaim that allocated memory space back to the host. This will increase the memory pressure on the guest, but will provide extra memory to the host.
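With libvirt, inflating the balloon is a one-liner (the domain name here is hypothetical, and it assumes a virtio-balloon device is present in the domain XML):

```
# Shrink a running guest's memory target to 1 GiB, handing the
# difference back to the host (virsh setmem takes KiB by default)
virsh setmem guest1 1048576 --live
```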
a virtio balloon is intended to be used in "near OOM conditions" (due to performance implications, mostly)
a normal VM is not doing this.
it's also true that even if it was opportunistic: qemu's memory allocation on the host would take precedence over host filesystem caches which would be consequently evicted.
The reason people really like containers at the moment is partially due to this memory situation, you can "fit" more containers on a host due to better fitting of the memory.
> a virtio balloon is intended to be used in "near OOM conditions"
Of course. This is why you need to plan your VM deployments on your system carefully, even if it's just a desktop system.
> it's also true that even if it was opportunistic: qemu's memory allocation on the host would take precedence over host filesystem caches which would be consequently evicted.
The filesystem cache in RAM is one of the most opportunistic mechanisms in Linux, and AFAICS the buffers are the least important allocations in RAM, since they can be re-read from disk when necessary. So the ballooning device enters at very high memory-pressure scenarios with an almost-full swap area.
The truth is, you shouldn't be nearing this point on any of your systems, running VMs or not.
Lastly, a bare-bones Linux system w/o a graphical interface needs memory comparable to a biggish daemon, hence when used smartly, they're invisible sans the context-switching load they induce on the system.
The way that qemu allocates memory is that generally it allocates 100% of the memory you've given to the VM (sans virtio balloon, which the VM doesn't really know about and will accidentally use for filesystem caches).
This makes qemu-kvm look like a program that uses a lot of memory, and linux (as a host) doesn't know what's happening inside and is not doing anything smart with free memory inside a VM.
This is true with many caveats, such as KSM (kernel samepage merging) and the nature of virtio ballooning. This is the foundation of how it works, which is what my parent comment was asking.
Well, I'm obviously in the minority (despite this kind of system programming being my day job for the past 30 years), but JIT'ed, garbage-collected languages should never be part of the core system toolkit, for actual technical reasons.
gjs (and the KDE equivalent) should have been kept as end-user system scripting languages, rather than having parts of the DE shipped written in them. It's a large reason why LXQt is significantly lighter than KDE.
It all comes down to the fact that no one has really solved the triumvirate of low latency, reasonable performance, and high collection rates with a GC that aggressively returns unused RAM back to the OS. It's a hard problem when the end application isn't known. For something like a generic control panel (excluding events/logging), the upper memory footprint can be bounded by the programmer, the GC will be more aggressive as it approaches that limit, and the final result is a reasonable compromise. That said, the hardcoded limit is likely 2x+ what would have been required by a non-GC'ed environment.
But for a general anything-goes environment, GCs are tuned towards performance and largely unbounded GC regions. Which means that in order to maintain reasonable performance they will just keep reserving address space from the OS, touching parts of it temporarily and then avoiding compacting it, because those operations are very expensive. Plus, with mozjs/SpiderMonkey (another whole set of problems for gjs due to the lack of a stable API), if the GC+JIT are aggressive about memory minimization there can be frequent, noticeable application lags as the memory is freed and then reacquired from the OS.
Then, because your desktop isn't under memory pressure, it's not going to page out the fragmented bits of address space by default. Until it is under pressure; then the whole system goes laggy, because while the LRU-like algorithms the kernel uses to take the memory back work well, the GC is completely unaware that bits of the address space it thinks it owns are suddenly going to respond much more slowly than it expects. And since it has shotgunned (or worse, is actually heap-compressing) stuff all over the address space, you're taking a lot more page-in requests than one might expect. So the problem gets compounded.
The end result (as with Java on the server) is that what benchmarks well in a trivial sandbox, by itself, on a machine starts to have longer-term negative consequences when it's sharing the machine with another application (or a dozen) also written under the assumption that address space is free and the OS will just take care of the problem.
So, JavaScript DE bindings are a good idea, but they should have been reserved for Windows-macro-recorder or VBA types of use cases, rather than having parts of the default installed runtime environment rewritten to use them.
> If a desktop environment is using a lot of memory, this means the same memory is not available to my other apps.
No, it doesn't mean that.
Your mental model of how an OS allocates and manages memory is stuck in the 60s. Nowadays getting a handle on "real" memory use is increasingly elusive.
Probably the best metric is to ignore it altogether, and instead look at memory churn. I.e. how much the OS ends up having to evict some pages from memory to make space for others.
Even that is only meaningful if it's happening on an ongoing basis. I.e. if your DE allocates 1GB that it uses once (or never) having to evict those pages once generally isn't a problem.
It's only a problem once there isn't enough memory to go around for those pages that are in active use.
> No, it doesn't mean that. [...] It's only a problem once there isn't enough memory to go around for those pages that are in active use.
So it does mean that.
The fact that I said it plainly, instead of mentioning private dirty pages gobbling up most of the available RAM so that executable read-only pages need to be evicted and re-read all the time, thrashing the system long before the OOM killer can kick in, doesn't make the statement "if apps take up lots of memory, other apps have less" untrue.
It's unclear exactly what you mean by the upthread comment, i.e. how "using a lot of memory" maps to e.g. Linux smaps statistics. But you do mention e.g. gnome-shell taking ~2GB.
My own gnome-shell has e.g. a 100MB allocation where >95MB is in "Private_Dirty", then a ~200MB allocation where all the columns except "Size" in the "/proc/$(pidof gnome-shell)/smaps" are zero.
People typically exclude the latter category when talking about "real" memory use.
But the former is the sort of thing that would make it into "real" memory use by most definitions, and show up in "RSS".
What I'm pointing out is that knowing that doesn't tell you anything about whether the memory use contributes to memory contention, which is the interesting question.
It's entirely possible (and quite common) that most/all of that was used as a one-off, and is either sitting there unused, or would be swapped out as part of expunging stale pages to disk.
So, you could have a process that at one point used 2GB of memory, and still hasn't free()'d it, but for the purposes of needing to fit a 7GB browser process into a total of 8GB of RAM won't contribute to contention. Even though it "uses" 2GB still (as in "RSS" etc.) it's only ever accessing 100MB of those 2GB. The kernel will happily page out 1.9GB to swap, and performance will be mostly unaffected.
To make any claim about whether an application uses a lot of memory on a modern OS you need to know a lot about its lifetime management of data.
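To make the Rss-vs-dirty distinction concrete, a small sketch that sums the standard smaps fields for a process (the field names are the stock Linux ones; "self" inspects the script's own process, substitute any pid):

```python
# Sum Rss vs. Private_Dirty vs. Swap across all mappings of a process.
# Rss counts everything resident; Private_Dirty is the part that has
# actually been written to and can't simply be dropped or shared.
def smaps_totals(pid="self"):
    totals = {"Rss": 0, "Private_Dirty": 0, "Swap": 0}
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in totals:
                totals[key] += int(rest.split()[0])  # values are in kB
    return totals

print(smaps_totals())  # e.g. for a pid: smaps_totals(12345)
```

A process with a huge Rss but a small Private_Dirty (and growing Swap) is exactly the "used it once, kernel paged it out, nobody noticed" case described above.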
For completeness' sake, pmap -XX $(pidof gnome-shell) shows 2G in Private_Dirty for me. I have since restarted Zoom, so I can't measure that.
Naively summing the Size column gives around 9G. I assume that's the measurement you thought I was referencing.
I agree interpreting memory usage numbers on Linux is hard. I also do think modern apps (including here gnome-shell and associated processes) eat up memory like it's unlimited and free.
For a developer doing software builds frequently, the amount of available memory for caching disk access (buffer) can very much make a difference. The more the better.
> If you had the hardware, would you rather see it used or see it dog slow trying to load UI components from disk?
I am willing to bet, that while Xfce uses less memory than Gnome or KDE, Xfce is also more snappy. I bet smaller memory use and snappiness correlate, while your comment is suggesting an anti-correlation.
XFCE is a pretty snappy desktop interface, sure, but KDE is not less snappy than XFCE. I have used both for more than 5 years, at the same time. KDE was on a much slower system and was not visibly slow compared to XFCE. Now I'm using KDE everywhere.
The perceived slowness is due to window effects in KDE. Disable these and window appear before you lift your finger from your mouse or enter key.
Lastly, KDE's memory usage comes from the facilities it provides. Disable them if you don't need them, and save a lot of memory. KWin (window manager + compositor) uses ~180MB on a 4K screen, while file manager needs another 57. That's not much.
I did extensive testing of various desktop environments, and KDE was consistently the most sluggish of the bunch (tested across multiple devices), even after disabling all of the eye candy. The next worst is Gnome.
XFCE is snappy, but annoying due to the 1 pixel window border that you can't ever click on a high DPI display. Mate is almost as snappy and has better UX, so I use that.
I run KDE on an old Intel NUC. It’s got 4GB RAM, 32GB SSD and a decade old i3. KDE still flies on that system.
The only time I’ve found KDE not feel responsive was in Kubuntu. I have no idea what is different about Kubuntu vs other Linux distros but the slowness in KDE on Ubuntu is down to Ubuntu and not KDE.
When Baloo finishes indexing, its processor load is practically zero. Akonadi is just a centralized account management framework which works on events coming from these accounts.
Baloo uses a lot of RAM depending on what you index, but neither Akonadi nor Baloo monopolizes the processor(s) and degrades system performance.
I decided to give KDE a shot again after over a decade running minimal window managers, and while I really like the interface, it occasionally just freezes the entire system for multiple seconds at a time. Screen stops updating, mouse and keyboard don't do anything. This is on an i7 laptop with 16 GB of memory. This never happened under StumpWM.
plasmashell, which is written in QML (JS-based), uses hundreds of megabytes, and sometimes leaks memory or bugs out (as far as I can tell, the leaking and instability/crashing is an interaction between QML/JS, the JS engine, and KDE's C++ classes exposed to JS through bindings).
After a week of uptime, plasmashell is consuming 350MB memory (taken just now from my process manager). Considering this is my work machine and I never reboot except kernel updates, I've never seen it leak memory or crash tho. This is Debian Testing with KDE 5.26.4 w/ Frameworks 5.100 w/ Qt 5.15.
350MB is not optimal of course. I may dig into the code to see the reasons, tho.
XFCE4 had a similar memory leak ~2 years ago, which was triggered every time screen went into sleep and came back.
So, bugs & shenanigans happen, but I'm still not sure that KDE is "hell of a bloat" considering the things it provides at the speed it provides them.
Checking coredumpctl, I've gotten two plasmashell segfaults in the past 4 months, and recall hangs and crashes happening more often in the past (especially under NVIDIA drivers). Additionally plasmashell has hung a few times and/or kwin terminated unexpectedly, though I think this is a Mesa bug (https://www.reddit.com/r/kde/comments/zbux1i/where_would_i_s..., https://gitlab.freedesktop.org/mesa/mesa/-/issues/7674) or amdgpu kernel bug (I've had multiple GPU driver hangs and unusable screens on sleep-wake, usually preceded by kwin dying in the same session).
I also checked coredumpctl, and no KDE related crashes logged there, hence validating my experience. Funnily, both my desktop systems run on NVIDIA cards w/ NVIDIA drivers, and both have uptime numbers in multiple months at minimum.
The personal desktop I used to use was regularly suspended to RAM and woken back up, and my work system is on 24/7. Both are running KDE, and never misbehaved. They are not lightly used systems either. Both installations use KDE extensively: Akonadi, Baloo, KIO, etc. are regularly used.
It seems the user either has some big problems with their KDE installation, or is looking at shared memory usage, to begin with. Let's see what I have in the same list:
157M kwin_x11 (window manager)
392M plasmashell (desktop shell, widgets and other on screen tools)
175M krunner (macOS spotlight equivalent)
30M kded5 (settings and background daemon)
10M kactivitymanagerd (multiple activity contexts)
40M polkit-kde-authentication-agent-1 (KDE polkit bridge)
25M org_kde_powerdevil (energy settings)
19M kdeconnectd (use mobile phone as remote control)
5M xembedsniproxy (make tray icons work)
13M kaccess (accessibility keys?)
16M ksmserver (Plasma session manager)
41M plasma-browser-integration-host (Firefox can show desktop notifications)
10M kwalletd (password manager)
6M kglobalaccel5 (global keyboard combos)
5M kscreen_backend_launcher (changing screen orientation and resolution?)
2M kio_http_cache_cleaner (?)
32K xsettingsd (apply colour scheme to Gnome applications)
(Some lines omitted since they are not running on my system)
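For reference, a listing like the above can be produced with plain ps. A sketch (the grep pattern is just an illustrative guess at KDE process names; adjust it for your own setup):

```shell
# Per-process resident memory (RSS), largest first, printed in MB.
ps -eo rss=,comm= --sort=-rss \
  | awk '{printf "%dM %s\n", $1/1024, $2}' \
  | grep -iE 'kwin|plasma|krunner|kded' \
  | head -n 20
```

Caveat: RSS double-counts shared libraries across processes, so these numbers overstate the true total; PSS (from /proc/PID/smaps_rollup) is fairer if you want to add them up.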
On uninstalling/disabling things:
- There's background services menu, which you can control which services to load.
- Baloo can be completely disabled by disabling file search.
- Akonadi is just "Online accounts". Add nothing, it consumes nothing.
- KRunner runs on plugins. You can disable any and every plugin, which will reduce its memory footprint too.
- You can uninstall any KDE feature package (at least in Debian), and these features will magically disappear from KDE interfaces.
Lastly, as another user noted, KDE and Qt have a self-resizing magic, and will try to minimize their memory footprint when the system is low on memory. So, you can run KDE on a 4GB system and a 32GB system. KDE will allocate more on the second system, but will evict as much as possible as memory pressure increases. Witnessed this in a professional project where we ran KDE on extremely limited thin clients and saw how it behaves.
> You can uninstall any KDE feature package (at least in Debian), and these features will magically disappear from KDE interfaces.
You didn't actually try this. It makes me annoyed that you think you can go on the internet and just claim something and expect it not to be verified; what a bunch of nonsense.
In Debian 11.5, when one tries to uninstall any of kactivitymanagerd, polkit-kde, kaccess, then the package manager also takes out KDE as a whole.
There are core and optional components of KDE. kactivitymanagerd is a core component, so it reverse-depends plasma, that's true.
> You didn't actually try this.
No. I tried this and did more. Like using Debian for 15 years, KDE since 3.5.x days or leading a Debian derivative distro's development for 5 years, and deploying that derivative nation-wide from thin-clients to heavy metal.
> It makes me annoyed that you think you can go on the internet and just claim something and expect it not be verified; what a bunch of nonsense.
Same here.
Let's not project our prejudices onto others unchecked, OK?
If you want to verify my claims, my webpage is there, with links to anything and everything I do.
The fact that you need to bet is exactly the problem, even when you are right. The article does 'blind monitoring of memory usage', which, as the comment you are responding to points out, may suggest a problem but could in fact also be benign and even beneficial.
So consider you are right and memory usage after boot correlates to being less snappy. That might be just a coincidence, with N being so small, and not a cause. Or maybe there is an interesting, unmentioned cause for both.
Or maybe the higher memory usage is due to features that most users actually want, for example better file indexing for search or whatever. It is not surprising to me that offering fewer features results in faster systems. In this case the comparison doesn't really make sense.
But it would also not surprise me if a system that uses a lot of memory is faster, if it uses that memory for caching for example.
So we don't know what 'free memory after boot' is a signal of, and we don't really know if that is even important. Maybe it is to some who have 4GB or less memory. I have 64GB in my laptop, why should I care? (honest question, I can think of reasons).
If you are going to measure a signal, it is only going to be useful if you have a sound story about what the signal represents and why we should care about it. Otherwise it's just a blind ranking game, kind of an e-sports competition, which will perversely incentivize useless investments (of time) and choices.
>I bet smaller memory use and snappiness correlate
You bet, but do you have any experience of any kind to back this up or is it just a vague hunch?
In my anecdotal experience, the snappiness of a DE is determined almost exclusively by the compositor and the amount of CPU cycles and disk I/O ops used by the DE, and less by the RAM used. Of major importance is the input lag of the compositor, with Wayland feeling far snappier than X11 in most cases.
So I wish more of these tests would investigate CPU usage and disk I/O, rather than these RAM tests where users come to the premature "oh look, DE_1 at idle uses 200MB less RAM than DE_2, so it must be faster" conclusion, which couldn't be more false in the real world usage scenarios of how human users perceive snappiness. There's much more to this than just idle RAM usage.
So to contradict you: no, idle RAM usage does not directly correlate with perceived DE snappiness.
Why not try to measure snappiness directly? There's tons of research about this in UX land, which is not my field but I'm aware of it, defining precisely which kinds of interactions at what latencies feel instantaneous, and so on and so forth.
There is a lot of focus in Linux land on measuring the system (CPU, memory, IO, etc.) from the perspective of a user (and by users), but it's actually often the least informative level; developers and operators often benefit more from this level of measurement.
I suspect this is because it's much easier to measure system utilization in a standardized way. Maybe this is a nice project: develop user-friendly and standardized application-level benchmarking for the Linux desktop (these are more readily available in web development).
'... I bet smaller memory use and snappiness correlate ...'
One way to look at this is as follows: optimising memory usage, e.g. by packing/encoding more information in a limited space, can reduce the running time of programs. This is because you reduce the amount of 'stuff' that needs to be moved around in memory (and programs are always doing that).
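As a toy illustration of the packing point (a sketch; it only shows footprint, not speed, and the exact numbers are CPython/64-bit specific):

```python
import array
import sys

n = 1_000_000

# One million integers stored compactly: 8 bytes each, contiguous in memory.
compact = array.array("q", range(n))
compact_bytes = len(compact) * compact.itemsize

# The same integers as a Python list: the list holds a pointer per element,
# and each element is additionally a separate int object (~28 bytes each),
# scattered around the heap.
boxed = list(range(n))
pointer_bytes = sys.getsizeof(boxed)  # counts only the pointer array

print(f"array.array: {compact_bytes / 1e6:.1f} MB")
print(f"list, pointers alone: {pointer_bytes / 1e6:.1f} MB (plus the int objects on top)")
```

Less memory touched generally also means fewer cache misses and less bandwidth spent, which is where the speed argument comes from.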
Hmm. I can see how my argument came across as anti-correlation. But my intent was to highlight how unreliable “memory used” is as a metric for how good and usable a desktop environment is, not to doubt whether XFCE could be snappier than KDE while using fewer resources. It could also be doing far less, right?
> But, my intent was to highlight how unreliable “memory used” is as a metric for how good&usable a desktop environment is.
I don't think this is what TFA is about at all. It is merely comparing resource usage, I don't see it making any other statements. This is useful if you must decide what to use on a less powerful PC.
Yeah, but Xfce also has fewer features and less polish, e.g. I have had regular problems with connecting multiple monitors or restarting a monitor, etc. KDE on the other hand has more useful applications pre-loaded (for minimalists it might be "bloat") but, like others said, if you have the disk and memory why not get a more usable DE.
That used to be true, but since KDE4 hit the scene, KDE is typically just as snappy. That's probably not true for low low memory situations, but in general that's been my experience.
And my primary desktop is i3, if that tells you anything. OTOH, even when running KDE I tend to use small tools such as leafpad, konsole, vim, etc.
I'm typing this comment from a notebook with 8GB RAM and it is barely enough for a modern desktop; my system runs out of RAM from time to time (because modern sites are JS heavy and Firefox with multiple tabs sometimes uses all available RAM). I use i3, which has a small memory footprint, and assume that with KDE/Gnome I would be running out of RAM more often.
If you have >32GB RAM you can ignore DE/WM memory usage, but it is a relatively privileged position to be able to upgrade hardware faster than software appetites rise.
This. Perhaps HN crowd can afford machines with sizeable amounts of RAM but many people in my family still use old hardware. 2G RAM machines are pretty common from Windows 7 era. I often "upgrade" family members unwilling to buy a new computer onto lightweight Linux distros which don't need that much RAM.
This is inherent in GCed environments, and firefox/etc aren't tuned to enforce minimizing memory utilization because it saps perf.
So, you end up having to do it yourself. Run a tab discard plugin, and then keep an about:memory window open and click the minimize memory utilization button on a regular basis (probably a plugin/etc to do that, but I just trigger it manually every once in a while). And that is on machines that have 32-64G of ram.
I bought an 8GB laptop (2GB reserved for the GPU) about a year ago with the intent of upgrading the memory. But I have been postponing it due to plain lack of need.
The only times it has lacked memory were when GHC autoconfigured itself to use 2 threads for each CPU core, and when I installed some memory-hungry add-ons for Kerbal Space Program. I never noticed the memory limit in normal usage.
I'm a CS teacher that asks their students to run multiple Linux instances in VMs. RAM usage is the biggest issue in this case. If each VM takes 1.5 GB just to open a terminal, running multiple on a laptop with only 8GB of ram becomes difficult.
Yeah, or just make an account on sdf.org. And why more than one GUI Linux VM? I personally would just connect (ssh or mosh) to sdf (NetBSD) for C, sh, Python etc. on sdf.org. BSDs and Linux are 90% the same and the rest is not hard to learn.
Interesting how you'd infer that a simple user shell (remote) would tick all the boxes I'd associate with running a VM. Root access, networking (between the VMs), works offline, can (doesn't have to) use GUI programs.
I never said anything about ticking "all boxes"; instead of having your students install 100 VMs you could just have set up a sandbox for them.
>works offline
Yeah that's not a thing when you learn CS.
>use GUI programs.
Yeah, please no, it's a waste of time when learning about servers... I think that's what you teach... well, I hope. And if you really, really want GUIs, tunnel X11 apps... ah hell, give your students a sandbox! Less inconsistency for you, less wasted time for your students.
Presumably because the desktop isn't what people are actually using their computers for, it's the applications on top of that.
So if you're using all of your CPU/RAM on the desktop, there's less left over for the stuff you bought the computer for
CPU is what I'm more worried about, because if my DE makes my fans run like jet engines, something isn't right. With KDE Plasma, I basically got rid of panel plugins simply because adding each one to the panel pushed CPU usage even higher (and I wasn't running any other heavy apps). I wish KDE would stop chasing features and dedicate a release or two to pure performance optimization; it is already an excellent, featureful DE.
It is this kind of attitude, 'whatever, just let the system use it', that has led us to where 2GB, then 4GB, and now 8GB all became 'no longer enough'. (And apparently 16GB is no longer "enough" either, as the apps or the entire system can still end up grinding to a halt or just crashing, even then.) Lack of accountability, lack of control and developer carelessness lead to memory usage and minimal/comfortable specs steadily increasing.
Err, why should I want my UI to eat up my RAM? It's a resource. I also don't buy the argument that it's OK for modern OSes to eat up 3-4GB of RAM, that it doesn't matter, and that it's freed if you need it...
For one thing, I would rather it be used for my applications, not the desktop environment.
But the real reason I love XFCE is that RAM use is just a small part of the puzzle of overall weight and resource use, which correlates with performance and responsiveness.
Another correlation, in my experience, is bugs and glitches. It makes sense to me, since more resource use means more code means more places for bugs to appear.
> Another correlation, in my experience, is bugs and glitches.
Most of us are slumming it with non-ECC memory, I'd wager. In a very real sense, more memory usage corresponds to a larger target for a bit to get flipped by a stray cosmic particle collision.
I think the real reason why XFCE feels so stable and reliable, is that it is old. Old software that doesn't undergo frequent revolutions has a longer time to catch bugs, and doesn't introduce them at the same pace.
I agree. Part of why I like it is that it's feature complete and rarely changes without my explicit request, something I appreciate in a workstation.
I recently saw an article about "maximum viable product" in software, and it really resonated with me.
I'd rather have the resources available for use in my applications, not being pointlessly chewed up by my desktop environment.
That's why I always use the Mate desktop: Small footprint, and does everything a desktop environment should: manage your environment & apps, have clean UX, and otherwise get the hell out of your way.
Perhaps in the hypothetical scenario that I had the hardware I wouldn't care. But I don't have the hardware, and I do care.
Separately, I'd think you'd want to weigh the memory usage against the utility of the features it enables. The memory is there so that it can be used, of course, but I have a certain memory budget, and I'd rather spend that budget on things I care about. So now I can weigh this 1 GiB overhead of Gnome, compared to something lightweight, against the extra facilities Gnome provides. I can't think of anything I miss in the lightweight option compared to Gnome, so I'll stick with my lightweight choices.
It depends on how you are using your desktop. More memory used for the DE means less memory for other stuff, like running VMs or development servers, compiling and dev tools, games or media applications, a browser, and even just caching disk files in memory.
> would you rather see it used or see it dog slow trying to load UI components from disk?
Are those the only two options?
In any case the memory is technically "used" when dog slow trying to load UI components from disk. It is just not used until that happens. Reminds me of those applications from the dialup days that would "speed up your internet" by loading every link on a page in the background in case you click one of them. In practice they often slowed down your internet by misusing your bandwidth.
Despite what you imply, and despite my own expectations, in actual usage I have observed that the greater the memory requirements of a desktop environment, the laggier it is likely to be in terms of responding to my input. Perhaps it is a mistake to assume that desktop environments' increased memory usage is devoted entirely to readying UI components that would otherwise be stored on disk.
I'd rather see it unused and not trying to load things from disk. Hundreds of megabytes of data in memory (or the need to reference this much) is a symptom that stuff's rotting..
The framebuffers should be the largest allocation required for a desktop environment.
Try running Stable-Diffusion, specifically the Automatic1111 version, and merging various checkpoints.
On my 32GB PC, after doing a few merges, the linux kernel is beginning to use many GBs of swap. Further attempts at merging checkpoints can result in OOM kicking in.
If I can save a GB or two, that can help get those final mergings accomplished. Otherwise, it's a case of restarting the application and continuing from there.
An alternative is to keep increasing the available swap space. Mine is at 10GB by default. The problem with that is you can end up waiting for the system to swap stuff in and out, and this can result in system pauses.
The other alternative of course is to buy more RAM. And here was me thinking 32GB was a decent amount ;)
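For what it's worth, growing swap doesn't require repartitioning; a swapfile works. A sketch (size and path are just examples, and the commented commands need root):

```shell
# Current swap situation (values are in kB).
grep -E '^(SwapTotal|SwapFree):' /proc/meminfo

# Adding a 16G swapfile (example size/path; run as root):
#   fallocate -l 16G /swapfile
#   chmod 600 /swapfile
#   mkswap /swapfile
#   swapon /swapfile
# To keep it across reboots, add "/swapfile none swap sw 0 0" to /etc/fstab.
```

Note that swapfiles on Btrfs need extra steps (the file must be created NOCOW), so check your filesystem's documentation first.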
It appears you did not read the article fully. They measure memory for the same reason, capacity:
> Overall if you are a user of KDE and Gnome you must be looking at the very least 4GB of RAM though modern web browser eat RAM for breakfast so 6GB or even 8GB of RAM sound a lot more desirable..
Edit: On a general note, the summary of your comment is like "measuring memory is bad – unless it's for <<this objective>>". Even if it's for <<this objective>> it still has to be measured in the first place. There is no need to argue against measuring as it then becomes circularly invalid.
I'd be a lot more interested in CPU usage; GNOME is pretty terrible there and tends to either stutter or, worse, freeze for a while, quite often. It'd be interesting to see if the alternatives are as bad.
Which GNOME are you using and on which hardware? I don't remember those problems with my old laptop from 2006 and with the one I'm currently using from 2014. Crashes happens, maybe not every year. I'm working with an Intel i7 4xxx, 32 GB RAM, SSD, GNOME shell 3.36, Ubuntu 20.04, NVIDIA Quadro K1100M.
However I might have skipped some problems because I was on GNOME Flashback for a while, from 2014 to 2020, while I waited for enough extensions to appear to recreate the saner (for me) environment from before GNOME 3.
I'm on Wayland, using whichever version of GNOME comes with Ubuntu 22.10. Hardware is a 3950X with a Radeon 5700 XT. That should be more than plenty to render Firefox and a couple of Electron apps (Slack, Discord) without stutter. Usually it works fine, but sometimes there is stutter or downright freezes.
Note that it doesn't (currently) crash; when a freeze happens it goes on for a while (30s?) and then goes away.
Your hardware is beefier than mine. It could be Wayland (I'm on X11), but it shouldn't be, because it is designed not to stutter. It should definitely not freeze. Graphics card driver?
Wayland as a protocol may be designed to be stutter-free, but the GNOME compositor is very clearly not designed that way. For example, it will happily request new frames or send new configuration events despite already being behind in processing what the client has produced so far, which likely makes it fall even further behind, creating a vicious circle.
I have an older Ivy Bridge laptop from 2012 (Thinkpad T430s, 16 GB RAM, SSD) and it does indeed stutter occasionally in animations; I assume it tries to upload some asset to the GPU and misses the presentation time. The overall performance is actually pretty nice for such old hardware.
> If you had the hardware, would you rather see it used or see it dog slow trying to load UI components from disk?
Fallacy already starts at this assumption?
And then there is also the one more/bigger whatever I'd want to open, and being happy when that is available right away. Memory efficiency also usually correlates quite well with runtime and power efficiency, and even responsiveness.
> but on a machine with 8 GiB of RAM, would you not rather see it being used?
Definitely no, wtf is that? :D At least not for the DE that should enable my work..
> I’ll never understand the logic behind “high memory usage” as a metric for desktop environments.
It's also far less trivial than looking at top output: at minimum you'd want error bars; i.e. repeat the measurement a few times.
At Phoronix, KDE's project to reduce memory occupancy has been shown to pay off: in their numbers KDE competes not with Gnome but with XFCE. If this benchmark shows a totally different result, at least one of them has a problem.
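Agreed on the error bars. Even a crude sampler like this sketch (reading the same MemTotal - MemAvailable figure that free derives its "used" column from) beats a single reading:

```python
import statistics
import time

def used_mib() -> float:
    """Used memory in MiB, approximated as MemTotal - MemAvailable,
    which is essentially what free(1) reports as 'used'."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])  # values are in kB
    return (info["MemTotal"] - info["MemAvailable"]) / 1024

# Take ten samples, spaced out, then report mean and spread.
samples = []
for _ in range(10):
    samples.append(used_mib())
    time.sleep(0.2)

print(f"used: {statistics.mean(samples):.0f} ± {statistics.stdev(samples):.0f} MiB")
```

For a DE comparison you would run this on an idle session for each environment and compare the intervals, not single numbers.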
I daily drive KDE, only see RAM usage that high while a browser is opened. Completely different distro though, so idk how comparable it is.
Also, it's running on a VM, and I assume under software rendering. It might not make a dent for others, but if you add fancy effects like KDE does, then I don't think this is a fair comparison.
There are more doubts I have with their methodology: "A VM will get a 16GB disk drive, [..] 128MB of VRAM". These seem byzantine and may trigger all sorts of behavior that isn't tested for or should be tested for.
Also, memory consumption is measured with `free`, which is even more evidence that not just the DE is measured, but the entire system. They only test Fedora Spins, but that is no assurance the rest of the config is identical between them.
Something about lies and statistics ;) If the proof of the pudding is in the eating, then simply use the DEs of your choice for a while and see what's optimal for you. Don't do silly things like measure memory consumption with `free` on a VM once, and draw general conclusions.
It's annoying that extensions like Blur My Shell can't fix the critical issues with the blurring because they're written in high-level JavaScript and can only use the bindings that GTK gives them.
So I'll have to deal without application blur, but at least the fact that I can blur some things (like the overview) is really nice.
I have other applications that need the memory. Just open Slack in one tab and it eats up 4GB. I need the DE to get out of the way as much as possible.
I use LXQt, which isn't very impressive but is light enough. Awesomewm is really nice too for a tiling environment. The best lightweight one was Enlightenment 17; I have no idea why they just dropped off the radar.
Wasn't Enlightenment the opposite of a lightweight environment? That is, if I remember correctly it included every single visual gimmick; its only saving grace was that it was coded fairly efficiently.
Bodhi's fork of E17 (Moksha) is still pretty popular. I think the Moksha dev team are of the opinion that Enlightenment's development post-17 has been a mistake.
Depends on your application tho. On a Raspberry Pi that is a server and has a GUI only for occasional use these metrics might be more crucial. If you have a high powered laptop that you just use for browsing, you might not care so much about anything at all.
This. Or if you use an old laptop for something, there may be better options than KDE. But that's the cool thing with Linux window managers: it's your choice.
The more interesting point of view is why buy new hardware, with higher specs, if the old one still works ? We only have enough resources to build new computers for the next 30 years, after that it's going to be impossible. We have millions of old machines lying around, maybe billions if you count smartphones. All unused. Why not use them instead of buying new ? We desperately need to aim for lower consumption if we want to use that. It's an environmental and a social imperative at this point.
while I don't mind on a desktop, if it's a not so recent smartphone and has 2G ram, memory usage matters. Gnome mobile and phosh are very good shells. Using the same gnome-settings on mobile is quite a feature, it being memory efficient makes for broader usage (in those devices)
Probably a remnant from a time when memory wasn't as dirt cheap as it is now. I still prefer to use i3 or sway, but because they're simple in all the good ways.
Are most people having a web browser open 99% of the time?
I know a lot of people who open many, many tabs and I believe it can certainly take up 40-50% of their RAM or more, especially on 2/4GB machines that are still ubiquitous for people who do not buy a new machine every 3 years.
Even if it's "bloated", I've found KDE to be a far more usable DE than Gnome. KDE provides you with more features than you need (or thought you might need), but everything's made discoverable, so you don't mind. Gnome relies on extensions to meet the bare minimum, and they break after each update. Once you add the extensions, I think Gnome ends up more bloated than KDE. Don't know about Xfce and the other stuff.
KDE is definitely not bloated in any way for the amount of features and capabilities it brings out of the box. Everything you need to get to work is already built in, no extra add-ons or extensions are needed, even for power users.
My Opensuse Tumbleweed KDE idles at ~700MB RAM usage so I have no idea what the author's Fedora does with KDE that it idles at 1.4GB RAM.
I mean, I definitely expect differences between distros, but 2x the RAM usage feels really strange. There's either a bug/memory leak, or several apps are cached or running in the background.
I feel that for such comparisons it would also be fair to list the "ps aux" output for each DE somewhere, so that readers can dig a bit deeper into the results if they want and see what exactly is running.
I think OpenSUSE does something with KDE, because the author also mentions the Wayland session crashing every other time and I'm just using Plasma Wayland as my daily driver now as it has reached a point where it's perfectly usable, even with 2 monitors.
Exactly, and that's my main beef with these DE comparisons. DEs don't exist in a vacuum but depend on which distro they run on, and how well they're integrated in that particular distro, as this affects stability and resource consumption.
I suspect if we were to test all these DEs on Arch or OpenSUSE, some of these results would look completely different, at least for the major "gas-guzzlers" like GNOME and KDE, so IMHO the jury is still out on them.
Still, I appreciate that the author has gone through this effort for these tests.
Agreed. I was a GNOME user for close to 20 years, but version 40 did it for me: I migrated to KDE and haven't looked back. It's slightly less pretty than GNOME and has a few usability rough edges, but overall it's just a much nicer experience.
"Bare minimum" is certainly an exaggeration, so I apologise. Gnome is indeed perfectly usable out of the box. And KDE is not "far more usable", it's just more convenient in lots of small ways.
I haven't used it for a while, but I found that for certain audio controls, or to allow certain window layouts, I've needed to install extensions. I also found the information it provides about wifi to be less than adequate, and I'd imagine that you'd need to install extensions to fix that. KDE provides a running graph of traffic to/from a Wifi endpoint, and other information that you might not have considered.
Yeah I'd agree that having the ability to customise what's displayed in the top bar could be useful, and the audio source selection could definitely be improved. It only takes a few clicks to get to settings but if you do it a few times a day it'd become a chore.
I do most of my audio controls with pavucontrol (which is not a gnome extension) regardless of the DE and I don't think that running graphs to/from a wifi endpoint is something many people do on a regular basis.
And if I want graph for any kind of metrics I usually use netdata.
I think GNOME's defaults work for anyone who subscribes to their desktop metaphor. I like it, some don't, but I don't think that is because it is incomplete. It is certainly less configurable out of the box than KDE though, and I understand why some people would prefer KDE.
I agree the graph thing is not a very "basic" example. I have another one:
Someone asked me recently how to change the brightness in GNOME. I looked at the keyboard but saw no sun icon, so I turned to the system tray... No icon either. I, in turn, asked someone else.
"Oh, just click the volume icon"
Turns out there is a whole panel of unrelated settings, hidden, that you can open by clicking either the logout button or the volume button... It's actually the same button. I am generally against hiding settings in menus and submenus, but if you're going to, can it at least be behind a universal cog icon?
It seems more like the product of an artist-architect willing to sacrifice everyday usability to commit to his "vision". In our case, the system tray must look symmetric and bare. I want tools to stay out of my way, but not out of the picture.
What you call the volume/logout button is really a whole area of its own. It is also where the battery and network indicators live.
I think it is more intuitive for people used to Android (not sure about iOS; it has been too long since I used it). On Android, most quick settings such as brightness are accessed by sliding down from the top of the screen, which doesn't have a cog icon either and looks a lot like that GNOME systray-ish area.
My biggest criticism is that those controls aren't, AFAIK, directly accessible with a keyboard shortcut (Super + something), even though said controls can, like many things in GNOME, be navigated perfectly well with a keyboard.
Then again, I haven't seen in ages a laptop or even a standalone keyboard that doesn't have dedicated buttons or function keys to manage brightness, and I can't think of any good reason to use the mouse for that.
For my wife, coming from Windows, it was quite a few things. With KDE, there were only small changes in the settings UI (like Alt+Shift to switch language, which GNOME doesn't allow without a plugin).
Use ReactOS, or buy a license from Microsoft for Windows and accept its drawbacks. Or provide a good introduction to Linux and KDE and let the user excel :)
GNOME is not an alternative to Windows. Linux is an operating system in its own right, building upon UNIX and Plan 9. systemd in particular gives Linux a foundation of its own. That said, we don't apply Emacs shortcuts to Vim, because it degrades the provided features and makes usage even harder. Humans are pretty good at adapting; it's one of our best features. An analogy: don't try to ride a bicycle like a car. It won't work.
PS: Best results with GNOME are with novice users. GNOME tends to be very keyboard centric and allows focus on your applications. These novice users don't try to apply usage patterns learned on Windows.
>Best results with GNOME are with novice users. GNOME tends to be very keyboard centric and allows focus on your applications. These novice users don't try to apply usage patterns learned on Windows.
Just out of curiosity, how many of these novices do you expect there to be? Also if you only target this group of people who have never used Windows, how do you expect to serve the massive number of people who started with Windows and made their way to Linux? Or do we not matter compared to that other group?
Most kids start their computing experience with tablets and mobile phones. GNOME has a very similar desktop metaphor, being made to work on both regular computers and tablets. My 9- and 12-year-old daughters have never used Windows, and they are autonomous on the Fedora + GNOME laptop I lend them when needed for their homework or to watch streaming services.
Yes. A lot of people don't work much with computers; it is just our bubble that assumes everyone does. In my case it isn't kids but older people who do well with GNOME (Fedora). I don't have to do much, and they're able to install major upgrades with one prompt. And I'm happy with GNOME as a power user because of the keyboard-centric usage.
PS: Most computer courses fail to teach the usage of computers. We need to read the computer's output, understand it, and then provide input. And for interaction, a very high-level concept of how a program works is needed. Instead, most computer courses merely teach "and now we click on the blue icon", which leaves users helpless. Interestingly, TUIs seem to be a magnitude better than GUIs: only text, and focus on the task. A lean GUI helps somewhat. And websites? Horrible. HN is a rare exception: just text from left-to-right, top-to-bottom.
Just Perfection. GNOME's animation speed is crazy slow; it takes almost a second to enter the overview. Just Perfection's fastest option is also a bit too slow, but disabling animations entirely is too jarring.
Don't know why most distros default to GNOME. It's an absolute nightmare for usability: without extensions, even bare-minimum things can't be done. If you are a Windows user switching to Linux, you should definitely stay away from this steaming mess of a wannabe macOS clone.
People forget that the early MacOS, GEOS, and Windows DEs used to run in a matter of a couple hundred K.
The entire system requirements of a windows 2k PC were 64M of ram, and yes it could display a background picture on a HD+ level display and pretty much do everything that is possible on a modern PC that your average user is doing. The darn thing was 32-bit, meaning that the apps could never allocate more than 2G (or 3G on a tuned machine) of address space and it was enough, even for video editing/etc.
Back when FF went 64-bit/multi-process I groaned, because the fixed 2G limit, which seemed terrible at the time, created a nice upper bound on how much CPU+RAM it would consume, so the developers made sure it ran reasonably well with those kinds of system resources. Now the sky's the limit, and if you dare limit its resource utilization via cgroups etc., it won't work half the time.
And of course it's your fault if you don't have a 16-core, 32G PC. To run a text editor and a chat application.
> The entire system requirements of a windows 2k PC were 64M of ram,
It was 128MB (and XP doubled that, which was one of the reasons I abandoned Windows XP for Linux).
> and yes it could display a background picture on a HD+ level display and pretty much do everything that is possible on a modern PC that your average user is doing.
Very very very few people had HD displays back then. I had a 19” display that could do 1024×768 and I was the envy of a few of my friends. Higher resolutions came a little later.
Also remember that icons were 8-bit and 64x64 pixels. Desktops didn't support widgets (unless you turned on Active Desktop, but that ran like crap even on powerful machines of that era), and so on and so forth.
And that's just the desktop. Everything was less secure back then. Telnet and rcp were still the norm. SSLv3 was still common. Keys were shorter. There weren't as many bounds checks, or any of the other things that have made our platforms more secure.
Sure, there's a lot of waste too, like writing applications in browser sandboxes rather than native code. But a lot of the reason systems back then were leaner was because they needed to be, rather than because modern systems aren't wasteful. It's like the whole Y2K problem: why store the year as two digits? Because memory was constrained, and now it isn't.
Source: was a software developer for Windows 2000, now I write software for Linux.
And, for what it's worth, as I recall 1600x1200 CRT monitors were high end but not unusual at all in the mid-2000s, and Win2000 and XP drove plenty of them in their time.
It ran like crap in 64M; 256M was about right, IIRC, although I had some servers with multiple GB (you really wanted to keep it under 512M for a desktop, though, because that kept PAE off and the kernel would then keep everything mapped, which sped it up a lot, IIRC).
In the USA you were seeing people with 1024x768? I suspect a lot of your friends weren't installing graphics card drivers. I can't tell you how many PCs I saw stuck at 1024x768x16 (or maybe it was x256), whatever the max VESA resolution Windows ME etc. supported. I would go to people's houses, install their video card drivers, set their monitors to 1280x1024 (or frequently higher for people with bigger monitors), and then tell them to upgrade to Win2k. And they would be shocked at how much smoother their PC ran. I would also adjust the MenuShowDelay and a couple of other things. I got a reputation for making people's computers way faster, but also sometimes for setting the resolution too high... laugh
Maybe I was running 1280x1024 then. Feels like a whole lifetime ago, so I might be getting some of the numbers mixed up on my own setup. But there definitely were lots of people still running 1024x768 on 15” screens. Albeit they either did so for eye-comfort reasons or because they were still on old 9x systems.
This was England, not USA. Not even London, so possibly a couple of years behind the curve.
> The entire system requirements of a windows 2k PC were 64M of ram,
> It was 128MB (and XP doubled that. Which was one of the reasons I abandoned Windows XP for Linux)
I tried 2000 on a 486 with 64MB. It installed and ran fine without crashing, except for the unholy slowness. I suspect the CPU and ancient HD were bigger culprits than the 64MB though.
> Very very very few people had HD displays back then. I had a 19” display that could do 1024×768 and I was the envy of a few of my friends. Higher resolutions came a little later.
1024x768 was doable on 15" and some higher refresh 14" monitors in the mid 90s, but kinda hard on the eyes.
By 1997, though, I had a coworker running 1600x1200 on a 19" or 21" with NT4 - that took some squinting. I suspect he liked people not being able to see what he was up to unless they got real close.
1280x1024 was pretty good on a 17" or 19" back then - 1024x768 felt a bit lo-res on a 19".
> I tried 2000 on a 486 with 64MB. It installed and ran fine without crashing, except for the unholy slowness. I suspect the CPU and ancient HD were bigger culprits than the 64MB though.
It would install because the minimum requirement was 32MB. But the published recommended requirement was 128MB.
> 1280x1024 was pretty good on a 17" or 19" back then - 1024x768 felt a bit lores on a 19".
Yeah, in hindsight I think i was running 1280x1024
I agree with most of what you have to say, yet we have to be careful with comparisons like:
> People forget that the early MacOS, GEOS, and Windows DEs used to run in a matter of a couple hundred K.
The simplest example is graphics. Early systems had limited resolution, limited colour depth, expected applications to repaint regions when necessary, and only sometimes offered primitive forms of graphics acceleration. Most people will want improvements on all of those fronts, yet each has to be paid for in memory use.
Should the jump be as big as it has been? Certainly not. On the other hand, using the earliest of GUI's as an example is misleading since the improvements I mentioned will require about 1,000 times the memory.
Which is why I pointed to Win2k, which could drive multiple 32-bit displays at HD resolution and above. I know this because I used to do it with multiple of those 21" Trinitron monitors (woot, the long-dead physical Dell outlet store) that did 2048x1536, connected to my work computer.
Sure, today you can do 4K, but a not-insignificant number of people are still running 1080p displays, which is actually fewer pixels than I had 20+ years ago.
What you get with a modern machine is Aero Peek functionality and transparency, but the RAM usage doesn't magically go down if you turn those off on something like Win7, where the option existed. Also, it's funny that we could repaint windows in real time on a 486 with barely noticeable lag (if any), but we can't do it for a few dozen applications on a 5GHz machine with a dozen cores if part of a window under a transparent one needs repainting. Even so, if a full-scale image of an occluded window is needed (not that it's even required for a peek-type function), what's the storage requirement? A few dozen MB per application? But that's not really how these applications work: when you peek/zoom a window it's getting redrawn in real time. Otherwise you would see stale versions of windows that aren't visible, and if you pay attention, that isn't what happens. Depending on settings (at least in KDE), there are a couple of accuracy options that can be tuned, which affect the repaint lag during zoom/peek functions.
I can't think of any graphics cards that could do 2048x1536 in 2000. Maybe several years later... much later... but even then you definitely wouldn't be running that on a 21” screen, because everything would have been tiny, and scaling wasn't a thing.
I mean, how many times can you be wrong in the same two sentences? I think people on this board have mentioned that monitor resolutions went backwards for a decade...
I'm fairly certain (but not entirely sure) it was a Dell P1110, about a dozen of which I picked up for the company I was working for in late '98 or maybe '99. Definitely a 21" flat-screen Trinitron with a Dell brand. There are various comments online about people buying the GDM-F520 and similar monitors based on the same tube in the late 1990s. And the graphics card wasn't the problem (although I can't remember if it was a pair of Matroxes or NVIDIAs driving them). The problem was that at 2K it didn't like to remain converged over the entire screen all day long, so about once a day I would adjust the edge convergence. Which is probably why the recommended resolution (vs. the max) was a notch lower, but IIRC it still had convergence issues.
And you would apparently be shocked to know even win3.1 could do font scaling/etc, although I ran native 1:1 in NT4 then w2k because back then my eyes were good enough to notice the color fringing caused by the monitor going out of convergence.
You might also be surprised that I have a 17" LG I purchased in ~'97 (might have been '96) that I ran at 1600x1200 on my personal computer, still sitting in my attic (I would go check the date code, but lazy). But that's just a boring monitor. My boss in ~'94 had an Ikegami, IIRC (23" or 25", I don't remember, but it was "huge" when I had a 15"), that was doing 2K (or maybe 19xx, it was weird), attached to some funky graphics card. The Wikipedia page mentions there were Trinitrons in the 1980s doing 2Kx2K. So maybe you didn't have one in your house, but there were plenty in professional settings concerned with graphics accuracy or with having a lot of screen area. I was in the latter camp; we were attaching multiple heads (and not just my cheesy second Herc for Turbo Debugger) to our customers' machines in the mid-1990s with early versions of NT.
(Random link about what I think is the same tube, an FD Trinitron, 0.22 dot pitch, 2048x1536 @ 85Hz CRT: https://forums.anandtech.com/threads/dell-21-monitor-p1100-i... I remember having the conversation with a buddy (who worked at a film shop you have probably heard of) who had the Sony-branded ones; he complained about the color accuracy but said they didn't have convergence problems, so I blamed it on Dell getting the lower-quality tubes.)
> I mean, how many times can you be wrong in the same two sentences.
If you believe I’m wrong then a simple citation is enough. You don’t need to sprinkle your rebuttals with arseholery
> And you would apparently be shocked to know even win3.1 could do font scaling/etc,
Font scaling has been around for decades. That's clearly not what I meant when I was talking about desktop scaling.
And I still think the max resolution you described would be uncomfortably small for most people on a 21” monitor. These days operating systems scale widgets to work around that problem; back then there was no such thing.
> You might also be surprised that
I wouldn't. And again, no need to be an arse with your rebuttals; we are all grown-ups, so show some maturity.
> i have a 17" LG I purchased in ~97 (might have been 96) that I ran on my personal computer at 1600x1200 still sitting in my attic
I do too. Never had a graphics card powerful enough to output that, though, and I had a pretty decent graphics card for that era. Or so I thought.
Just goes to show there was a lot of hardware out there I didn’t get to play with.
You make some good points. Pity about the way you made them.
> On the other hand, using the earliest of GUI's as an example is misleading since the improvements I mentioned will require about 1,000 times the memory.
800x600px @ 32-bit was my favorite resolution and depth for a long time. 800x600x32/8 (800 pixels x 600 pixels x 32 bits per pixel / 8 bits per byte) = 1,920,000 bytes. Call it just under 2MiB. But 4K? 3840 x 2160 x 32 / 8 = 33,177,600, or just around 32MiB: about 17x the size. No, it does not require 1,000 times the memory. It requires only about 17x.
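The framebuffer arithmetic above can be sketched in a few lines of Python; the resolutions and the 32bpp depth are the ones from the comment, nothing else is assumed:

```python
def fb_bytes(width, height, bits_per_pixel=32):
    """Size of a single uncompressed framebuffer in bytes."""
    return width * height * bits_per_pixel // 8

svga = fb_bytes(800, 600)    # classic 800x600 @ 32bpp
uhd = fb_bytes(3840, 2160)   # 4K @ 32bpp

print(svga)                  # 1920000 bytes, just under 2 MiB
print(uhd)                   # 33177600 bytes, about 32 MiB
print(round(uhd / svga, 2))  # roughly 17x, nowhere near 1000x
```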
Where memory is being lost is in massive amounts of inefficient languages copying and duplicating huge swaths of images (and tons of other data too) that will never or rarely be used. Want transparency on a window? Well, now you have your image, the background image, and the real blended image. Want to show bunches of pretty little custom images? Each of those comes in, and some of them aren't even visible! But they still need to be loaded up.
And that doesn't even talk about the slowness of the application. Tons of IPC communication going on to enforce sandboxing, instead of just directly doing whatever it is that needs to be done. In-process multi-threading models with inefficient languages exacerbate it. Inefficient design with huge layers of abstractions to support a fifty bajillion different ideas for how a user might want those pictures shown (oh, you want big icons but another user wants little pictures; you want light colors, another wants dark colors, and a third wants custom colors...; different fonts, different etc...).
And that's just looking at images. What about data? Gotta hold all of those API responses and telemetry somewhere. Gotta cache them so you don't have to request them again. Gotta hold it in memory because disks are slow. Gotta keep all of the metadata too. Might as well put it into a sqlite3 database and make it easier to manage! Or maybe the developer can just store it all in a linked list and look it up as needed... And what happens when you run out of memory? Goodness, that's annoying and difficult, especially in a terrible language, so just don't worry about it. Just allocate something new and let the garbage collector... collect your garbage that you've strewn all across the user's machine. Who cares if it's "slow" as long as your application doesn't crash? Need to render a new image? Best to render it from a template image, so now you're holding even more images and also allocating new ones every time you need to render stuff. Except that it's all done in CSS by the kids these days, so the template image isn't even an image... it's text... to be rendered by the CPU. Yes, cool. No, not fast.
All of that ends up being on top of terrible "cross-platform" frameworks to make it cheaper (to the developer) to build the damned software.
Why are computers so slow and bloated these days, anyway? It's certainly not because computers are slow and bloated.
Ah, some good food for thought for those new Linux users that have no understanding of software engineering and caching.
Because most engineers know that memory usage of different pieces of software does not mean very much. It is very possible that a DE boots into 100 MB and then its usage grows to absurd levels while it is being used, due to poor coding or architectural practices, while another DE might use 500 MB at boot and then not increase very much, because all its core logic is contained within that amount of memory.
Many enthusiastic people entering the Linux world, the type that run neofetch and post it on Reddit every time they jump to a new distro, are always really interested in the RAM usage figures, as if they were a real indicator of "bloat".
While I immensely appreciate how clearly this comparison defines its methodology, posts like this are a good way of keeping alive that dumb practice of running `free` after boot and drawing conclusions.
I did _not_ just run `free` after boot, maybe you need to reread the methodology.
As for RAM leaks: I cannot and will not account for that; such a test could take weeks to carry out. In my experience modern Linux DEs do not leak that much, and people normally do not interact much with their DE (yeah, I'm not joking) anyway: you boot into something, e.g. GNOME, and you run your applications (web browser, spreadsheet, terminal emulator, media player, instant messenger, etc.) while your DE runs happily in the background.
You interact with your DE whenever you create a window, switch to another, press the Super button, right click on the desktop, get a popup notification, hit your media keys, and in a decreasing way the more you move towards the Window Manager end of the spectrum.
So of course I would expect Fluxbox to use less memory than KDE, because Fluxbox doesn't do anything if I press the volume keys (please correct me if I'm wrong, I haven't used it in a decade).
Does that mean that Fluxbox and all of its associated tools are going to be as snappy during real usage? You certainly can't know by just analysing the RSS and shared memory size of the fluxbox process.
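For anyone curious what "analysing the RSS and shared memory size" of a process actually involves: on Linux those figures come from /proc/&lt;pid&gt;/status. A minimal parsing sketch; the field names are the standard procfs ones, but the sample values are invented:

```python
def parse_mem_fields(status_text):
    """Extract VmRSS, RssFile and RssShmem (in kB) from /proc/<pid>/status text."""
    fields = {}
    for line in status_text.splitlines():
        if line.startswith(("VmRSS:", "RssFile:", "RssShmem:")):
            key, value = line.split(":", 1)
            fields[key] = int(value.strip().split()[0])  # value looks like "<n> kB"
    return fields

# Invented sample, shaped like the real file:
STATUS_SAMPLE = """\
Name:\tfluxbox
VmRSS:\t    9240 kB
RssAnon:\t    2048 kB
RssFile:\t    6168 kB
RssShmem:\t    1024 kB
"""
print(parse_mem_fields(STATUS_SAMPLE))
```

On a real system you would read `/proc/<pid>/status` (or the more precise `smaps_rollup`) instead of the sample string.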
> You interact with your DE whenever you create a window, switch to another, press the Super button, right click on the desktop, get a popup notification, hit your media keys, and in a decreasing way the more you move towards the Window Manager end of the spectrum.
Most of this is handled by a window manager. Again, most users launch their favourite set of applications at the start of the day and run them until (if) they log off. And the average programmer is no different: a web browser, IDE, terminal emulator, Telegram and Slack all running non-stop.
It's not so clear-cut. Your kwin process uses KDE libraries and facilities, some of which are used by Dolphin as well. It's a mesh of interconnected processes communicating via IPC, Unix sockets, or shared libraries, not standalone islands. Firefox might show a native printing dialog that's exposed by some core code loaded when you logged in.
In fact, the line between DE and WM gets blurrier on a bigger desktop, especially in the Wayland era, when your WM is actually also your compositor and so handles all your windows and their positioning, input dispatch, permissions, etc.
Fluxbox is just about the snappiest window manager there is. On Arch, I actually used its precursor, blackbox.
It has better DPI scaling than most window managers today, and had zero functionality for notifications, volume keys, Super buttons, right-clicking on the desktop, etc. It didn't even have Alt+Tab, and didn't include a compositor for transparent windows.
I got to add all those myself and was happy with it for a while. Dunst (the notification daemon I used) allows you to change the content of a notification while it is showing, so I used that for displaying the current volume when I used the volume keys.
But after trying Fedora Silverblue I'm basically in love with GNOME Wayland now.
What things do you like the most about GNOME Wayland? Are those things specific to Silverblue? (I've been curious about Silverblue too for a while but haven't tried it yet.)
For one, the DPI scaling is perfect. Everything is crisp, clean vectors. There are no text scaling issues, no hinting issues, no antialiasing issues. It is simply a pleasure to use GNOME on a 4K display.
For two, the trackpad gesture support is perfect. Everything feels super natural and connected to your movement, and the spring animations are super nice. My only gripe is that there's no elastic overscroll in GTK apps, but eh. Firefox has it, and that's the majority of my computer time.
For three, somehow everything manages to have a mostly consistent theme despite GNOME not supporting server-side decorations. I was not expecting this.
I rebased to Kinoite to try KDE Plasma and couldn't get past the terrible scaling issues, the extremely outdated design, and the total lack of consistency. And forget about trackpad gestures. In fact, they don't even let you change your trackpad's cursor speed! Hope you like the default!
Most of what I like about GNOME is not Silverblue-specific. Although, being able to install Nvidia drivers with two commands and have them work flawlessly is almost certainly Silverblue-specific.
Exactly. Please, pretty please, use all my memory for even trivial tasks. Just make my system snappy and carry me while under load. I've got 16 cores, please use them. Use them a little bit smartly since I pay for power. Perhaps, don't write as much as you could since my SSD is the first part to die.
The most memorable increase in happiness for me was when Ubuntu GNOME went fully high-refresh-rate. Things just felt faster after that. I'm pretty sure that is nowhere near the same ballpark as the HDD-to-SSD transition of a decade ago. And still, it's the feeling that counts. (I do look back fondly on the whole optimizing-the-Windows-XP-installer-down-to-42MB scene of a couple of decades ago, but that's not where productive people go.)
I would say most engineers know that high memory usage without a clear cause correlates strongly with poorly written software: it's likely to be slow and shit.
As a long time Linux user I can appreciate what you are trying to communicate here - but it comes off as kind of abrasive and harsh. Those kids playing with their machines, running neofetch, 'ricing' their desktop - they are the future of this ecosystem. You can either alienate them or try to shepherd them, I think we need to try the latter.
I remember the sysadmins in the company I was working for 20+ years ago. They had been managing HPUX and Solaris systems forever, and we suddenly added a bunch of Linux servers for an application delivered by my team. They gave me a call one day: "Those Linux machines were at 100% RAM, so we doubled it. They are still at 100% RAM. What's your software doing?" I guessed that the system management software on HPUX and Solaris was clearer about the distinction between memory used by applications and memory used by the file system cache, because I remember the file system cache was a thing even on HPUX and all the other UNIX flavors I used at university ten years before.
Indeed. Free RAM is useless RAM and a waste of actual money. If everything I use is magically always in RAM with no swapping ever, I don't care if I'm constantly at 100% usage. That's what I bought it for.
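The distinction those admin tools blurred is exactly what /proc/meminfo exposes today as MemFree versus MemAvailable (the latter estimates how much memory is obtainable, including reclaimable cache). A minimal parsing sketch; the field names are the real ones, the sample values are invented:

```python
def meminfo_kb(text):
    """Parse /proc/meminfo-style text into a {field: kB} dict."""
    out = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        out[key] = int(rest.split()[0])  # rest looks like "   <n> kB"
    return out

# Invented sample for a box that looks "almost out of RAM" to the naive eye:
MEMINFO_SAMPLE = """\
MemTotal:       16326428 kB
MemFree:          512340 kB
MemAvailable:   11873200 kB
Cached:         10240880 kB
"""
info = meminfo_kb(MEMINFO_SAMPLE)
# "100% used" by the MemFree reading, yet most of it is reclaimable cache:
print(info["MemAvailable"] - info["MemFree"])  # 11360860 kB reclaimable
```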
We're lucky we are unable (AFAICT) to have real stats on L2 and L3 cache recency utilisation.
I have been using xfce for years now.
I would recommend people try it, no matter what your machine specs (I have a good laptop).
It is just, really, really, fast. I mean, there is a bit of latency opening Firefox or TB from cold, but everything else is just... Instant.
Tabbing windows, instant response when clicking buttons. It is night and day when comparing to windows. I use it all day, rarely reboot, and it never lags. My phone lags way more than desktop even. It is a desktop that gets out of your way :)
The downside is it makes using MS Windows very frustrating (I keep thinking I must have mis-clicked due to the delays), but thankfully I don't have to use Windows much.
I used XFCE for a few years. My biggest issue with it was that my work machine is a laptop that I have configured as dual-screen with an external monitor. I have a need to dock and undock several times a day as I alternate between working at my desk, go to meetings, work from somewhere else in the building, etc. Maybe it's better now, but back then XFCE was terrible at figuring out where windows are supposed to go as displays came and went. GNOME (hell, even GNOME 2) to its credit, always handled this flawlessly. KDE was iffy for a long time but now it's fine.
+1. Been using Xubuntu LTS for my desktops and laptops since 10.04 (I now see that's from 2010). My experience concurs with yours. Xfce is excellent for me; I hope it stays this way. I see my kids use Windows, and some things in modern Windows strike me as insane nowadays (I've been a Windows user too, since Windows 3). Hope I'm never forced into a position where I have to change away from Xfce.
I'm actually less interested in how much RAM the desktop uses in normal usage, and more interested in how gracefully it handles curveballs. For instance, some time ago I accidentally pointed a file manager at a directory containing a massive JPEG image (I think it was a photo of decapped chip, or something of that nature). The file manager immediately decided to generate a thumbnail, ate all the physical RAM, and triggered a swapstorm that made it next to impossible to find and kill the offending process. It took best part of five minutes to regain control of the computer.
And worse, if Dolphin misinterprets a pseudo-random binary file as a Targa image, even the tiniest of files can be read as an image with millions of pixels and gigabytes in size (even though only the first few pixels are actually present in the file). And malicious files can be small but decompress to gigabytes of pixel data as well, so a source-filesize limit is insufficient to prevent pathological files from eating RAM.
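A quick illustration of why this happens: a TGA header is just 18 bytes declaring width, height, and depth, so a tiny (or random) file can promise a ridiculous decoded size. A sketch; the header layout follows the Targa format, and the file here is fabricated:

```python
import struct

# Build an 18-byte TGA header claiming a 65535x65535, 32bpp image.
# Layout: id_len, cmap_type, image_type (2 = uncompressed truecolor),
# 5 bytes of colormap spec, x/y origin, width, height, bpp, descriptor.
header = struct.pack("<BBB5xHHHHBB", 0, 0, 2, 0, 0, 65535, 65535, 32, 0)
tiny_file = header + b"\x00" * 4  # only one actual pixel of data follows

# What a naive decoder would believe and try to allocate:
width, height = struct.unpack_from("<HH", tiny_file, 12)
bpp = tiny_file[16]
implied = width * height * bpp // 8
print(len(tiny_file), implied)  # 22 bytes on disk, ~16 GiB if decoded naively
```

Any thumbnailer that allocates the full declared buffer before validating the payload will eat RAM on a file like this.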
In the Xorg version of gnome-shell, it was possible to crash the user session by just requesting a drawable surface that's _too big_. I managed to do so with a textbox from seemingly safe python code.
Oh boy I remember when Gnome's file manager ate all my ram trying to generate thumbnails for a folder with millions of images.
And when Dolphin crashed and refused to work again when the nfs shares that were mounted got unexpectedly disconnected from the network. I had to unmount everything from the terminal before Dolphin decided to work again.
In my experience Linux DEs are notoriously bad at handling edge cases like this. Probably one of the more frustrating parts of the Linux desktop experience. It’s not a constant pain but on the odd occasion that you hit these edge cases it tends to really suck.
I still think that desktops eat a huge amount of resources. Win98 ran correctly with 32MB, and you had more or less the same features as nowadays' desktops.
Indeed. I recently took out my Pentium 133 MHz running Windows 98 and Office 97. Same, if not more pleasant experience, than modern Windows machines. Very little has changed in principle besides animations and advertisement.
IceWM takes less than 20MB of RAM - it represents Windows 98 quite well.
Full-featured DEs offer features far exceeding what Windows 98 had: applets, advanced window management, 24-bit graphics (Windows 98 icons were mostly 8-bit) plus transparency, and a ton of convenient APIs. Coding in pure Win32 is quite difficult.
Not sure what exactly you mean by applets, and especially by advanced window management.
IceWM is far more advanced/usable than Windows 98 wrt window management (and better than modern Windows, since things like the TaskBar have regressed heavily).
I remember a default Delphi application with 0 lines of actual code, showing nothing but an empty window, compiling to something like 300KB circa 2000, all thanks to the VCL, while its pure C Win32 counterpart compiled to a few kilobytes.
If you use an alternative set of libs, such as KOL&MCK, you get fully featured GUI programs in the tens-of-KB range (about 50KB for an empty window with a button, IIRC) while keeping all the benefits of the coolest RAD that Delphi/Lazarus provide.
...And then you UPX it to shave some more bits. ;)
IIRC it included the runtime libraries. In the case of VB you had to install them in addition to your software (and I think they were bigger).
Today you can download the .NET runtime quickly and semi automatically, but back in the day you had to make sure the user had them, so you included them in the bundle anyways...
Compared to Windows 3.1, the default theme flew, and visually it was way ahead of Win98. I remember getting pseudo-transparency working, but I think that was in the Pentium 100MHz days. Other than that, it checked all the other boxes on your list.
I'm all for a good hyperbole. Sure, there's plenty of unnecessary and bloated features in modern Windows.
Notably the memory-leaking garbage news widget. But there are also plenty of useful features: multiple desktops, search, multi-window handling, etc.
FVWM ran in under 8MB, almost as fast, and it had far more features window-management-wise.
On "the same features"... no. Not even close. No Unicode, no CJK input with IBus, no GVFS/KIO for thousands of protocols and devices (MTP), no memory protection...
I remember that when my father upgraded to a K6-II machine, I adopted his old Pentium 100MHz (overclocked from 90) with 40MB of RAM, and it was less than snappy in Win98.
It was actually the reason I started using Linux: to maximise the use of the resources. On Win98 that machine was pretty much single-task, while on Linux I could use command-line and TUI applications to play music, work on a school document with AbiWord or StarOffice, and chat over IRC/ICQ/Jabber while browsing with Netscape/Mozilla, then Phoenix/Firebird. On Windows 98, listening to MP3s in Winamp, the audio would quickly start stuttering if I was doing other stuff at the same time.
Funny this. 20+ years ago I had a similar (K6 II) machine w/Win98. I used one of those sketchy "IE Extractor" apps downloaded from who-knows-where to remove IE and replaced it with Mozilla. Performance increased noticeably; apps opened near-instantaneously upon clicking.
Since then, I've never had a PC/OS combo as fast as that. Need to work on that now...
Yeah, but those were my formative years. I remember a lot of people recommending LaTeX, which led me to start using Lout[1] for a while; it felt easier to learn and had a much smaller installation footprint.
Win98 was a turd. Even SE crashed a lot. No memory protection. No daemons to mount zillions of remote drives or USB devices with GVFS/Kio.
Guess why NT based OSes used far more memory.
>Firefox - SJW_crap = better,smaller,faster,securer. But SJW_crap became their primary objective so we're doomed...
IE+Active X+OLE+Windows 98 = malware haven. Firefox supports audio/video securely, and even WebGL support. And WASM. Yes, you could do that with Flash and a Pentium III with Street View, but your computer was an ad-ridden malware epidemic party.
You've chosen to miss the point, and you're also comparing sand to ice.
win98lite was a Windows install modification for Win98 that removed some stuff, so you could spend more time playing computer games and less time defragging your hard drive.
My Ubuntu KDE installation uses ~700MB RAM at idle with a fresh login and I've been using it with Wayland for years now without the problems the author describes.
My Steam Deck KDE uses ~1.3GB, more than half of which is taken by the baloo_file_extractor process.
Apparently doing background indexing of file contents gobbles RAM. Not sure if that's the problem with the author's setup?
He's included screenshots of top. It appears he has akonadi installed which from my experience is a huge memory consumer. It's the first thing I uninstall on new builds.
Ah, I missed the zip file with the screenshots in. Yes, Akonadi loves RAM. I'm a bit confused as to why it's installed and running by default in this case - is this a Fedora-ism? I don't think Kubuntu or Arch install it by default, it's only installed as a dependency if you install elements of the KDEPIM suite.
FWIW, after baloo_file_extractor finished indexing new files on my Steam Deck, it exited, and KDE RAM usage was down to ~600MB.
Yeah, it's a bit old-fashioned IMO. It comes with CD/DVD ripping software and an Office suite, for example. It seems to be aimed at office users rather than home users. I always uninstall most of the default software.
Interesting, MATE should be under 400MB or less if you do not enable optional HUD or tray resident applications. Even in 64bit, it ran just fine on a pi2b with 512MB ram, and was about 340MB at boot-up with marco (no colord, HUD terminal, or evolution mail etc.).
The issue arises in how linux reuses ram pages for shared objects. While you may have many processes... the actual incremental program size can be very small. One of the many reasons oversized kernel ram cache/buffer sizes can help with performance (often as high as 40% on small flash memory io-constrained devices).
A heavily customized MATE on Ubuntu LTS OS is part of the standard developer platform I recommend. This takes a bit of effort, but there are benefits to having everyone on the exact same hardware, OS, and compiler version. i.e. cloning/repairing a workstation snapshot takes 30 minutes.
Fedora and its community derivatives can be problematic for developers... as the security-policies, application and driver-support can sidetrack projects.
Better for DNS/db/key servers, if you aren't allowed to use *BSD because "reasons".
Sometimes a hardened system is not ideal when you are trying to debug something complicated already. Something like alpine-linux is also intended for specific uses-cases.
I have Ubuntu 20.04 running MATE on a RPi 4 8GB and I have to say it is pretty damn sluggish. I need to dig under the hood more to lighten it up, because even with a quad core at 1.5ghz and 8gb ram... the window system and just general menus and movement feels slower than an old Win2K machine I had with only 64MB of ram at 266mhz. I had to enable the outline-of-window-only when moving or resizing because just moving a firefox window on the screen drops the FPS to a slideshow.
The out-of-the-box performance on the pi4 is a drag, OpenGL support has been silently crippled in many repo applications for compatibility reasons, and overclocking the sdcard is more limited than the pi2/pi3 (primary bottleneck). You may have to brute force the marco GPU flags to figure out which features are implemented, don't break video codecs, and don't conflict with kernel settings (v4l driver).
There are a dozen reasons things may be glitched, but usually it is the lack of active cooling causing the pi4 to thermal throttle. Also, mate-tweak tool can save time digging for settings.
Also, if it is the Ubuntu distro rather than Raspbian, then there is a lot of missing software to get things working well.
What on earth requires eating 1.5 gigs of RAM?
I'm writing this under KDE 22.08 and plasmashell takes 700M RES, KWin 180M RES. Why, when we were able to serve the same windows in the '90s with 64MB of RAM total, does a window manager now require 180M? No compositing is running.
The functionality is basically the same, no user-experience breakthrough. And yet.
Modern WMs and desktop shells use way more memory because they cache more bitmaps (we now render text as bitmaps and composite them, and the drawing styles/themes are now much more than flat gray fill with the odd non-antialiased line), and because the bitmaps are far larger due to much higher resolutions and bit depths.
A single uncompressed 1080p 32-bit (24-bit with 8-bit alpha) wallpaper is 66MB of RAM. A similar single 4k wallpaper takes 265MB RAM. All those fancy-looking high-res 32-bit icons also take huge amounts of RAM.
I'm an idiot, you're right. There, it's on record. I claim under-caffeination. Still, the basic point still stands, even if my numbers need to be divided by 8. Desktops use more memory because, as users, we demand fancy true-color graphics, high resolutions, and anti-aliasing that we didn't have in the Windows 9x era.
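For the record, here's the corrected arithmetic as a quick sketch (bytes, not bits - 4 bytes per pixel for 24-bit colour plus an 8-bit alpha channel):

```python
def wallpaper_bytes(width, height, bytes_per_pixel=4):
    """Uncompressed size of a 32-bit (RGB + alpha) framebuffer image."""
    return width * height * bytes_per_pixel

print(wallpaper_bytes(1920, 1080) / 1e6)  # 1080p: ~8.3 MB
print(wallpaper_bytes(3840, 2160) / 1e6)  # 4k:    ~33.2 MB
```

So an uncompressed 4k wallpaper is around 33 MB, not 265 MB - but still four times the 1080p figure, which is the point that stands.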
I would be fine with high memory usage if it was almost all shared memory, but what really annoys me is that it isn't. Also the lack of cross platform UI toolkits(not just Windows-Linux cross platform, but really such that they integrate with KDE, gnome, xfce, and anything else as well as Windows). Take just one example, system tray. There is no common interface for it. Same with many other things. Don't get me wrong, Linux in the desktop today is usable, but it leaves a lot be desired.
Systray is a wart, abused by too many applications. They should take the android approach, background tasks that cannot have UI and have to communicate with a frontend, which for a change cannot be left hanging in background.
In other words, if you want something running in the background, make it a systemd user service, so the user can use standardized tools to enable/disable/start/stop it. None of that nonsense where apps keep running after you close all of their windows and refuse to quit (like Skype does).
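For illustration, a minimal user unit could look something like this (a sketch only - the service name, binary path, and flag are made up for the example; check what your distro actually ships, since e.g. Syncthing packages usually include a ready-made unit):

```ini
# ~/.config/systemd/user/myapp-sync.service  (hypothetical example service)
[Unit]
Description=Background file synchronization

[Service]
# Replace with the real binary; it should run in the foreground and have no UI.
ExecStart=/usr/bin/myapp-sync --foreground
Restart=on-failure

[Install]
WantedBy=default.target
```

Then `systemctl --user enable --now myapp-sync.service` starts it, and `systemctl --user status` / `journalctl --user -u myapp-sync` let you manage and inspect it with the same standardized tools as any other service.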
Caring about cpu/wifi temp/frequency/network traffic/battery level seems antiproductive. If the computer is slow 1 time a week, open something to investigate that.
Windows 10 mapped clipboard manager to win-v popup window, works well.
My hardware often works literally for over a decade. If being frugal and taking care of your devices is "antiproductive" for you, I'm a little bit concerned but it's up to you anyways.
Maybe you live in a 1st world country and earn $15K monthly and you can replace your laptop every month. Good. I can afford one every few years at best.
Win+V works here in XFCE as well only it shows over 35 recent entries as opposed to maybe 5 in Windows 10.
All my laptops are at least 3 years old, if not 7. My Nexus 7 is 9 years old, still use it every day.
Taking care of something that doesn't need taking care of seems to be a waste of time though. If the computer gets too hot, it'll slow down until it's not by itself.
I still use gkrellm, but I would much prefer it lived in the system tray, or if tray was gone on the right of the top panel. However, if having a common standard for a system tray is not happening it is even less likely for a common panel interface/API.
As for systemd services: I see the tray's role not as a background task runner, but as a way to show tiny GUI elements unobtrusively.
I used it briefly over two decades ago and I stopped for one single reason: it requires screen real estate; it's an actual window. Meanwhile my task bar is visible at all times.
My desktop is meant for multitasking, I need a lot of stuff opened to keep track of important information, and the systray allows me to glance over it, and know from which program that important notice came from.
No, I don't want to use a dedicated worspace filled with programs. No, I don't want a notification icon that has to be opened so I keep track of things.
A system using systemd for managing which apps can run in the background can be managed by a task/tray-bar or whatever it's called, just as you expect it to. However, their lifecycle would be decided by yourself/the taskbar manager, rather than each application individually.
So in theory, you could expect all applications to work the same, and any application could be minimized to the taskbar or be run at startup, without the program having to offer that functionality in itself.
> In other words, if you want something in background, make it systemd user service, so the user can use standardized tools to enable/disable/start/stop it.
That would be sufficient if the "standardized tools" provided a similar UI and user feedback for those background tasks. Sure right now I can already have my file synchronization tool run in the background with a systemd service and manually use `journalctl` or `GNOME Logs` to monitor its state, but that's a far inferior user experience than simply having an icon in the corner of my screen which immediately tells me the state of the file synchronization and also allows me to quickly alter its settings.
So as long as those standard background tasks are so tedious to monitor and interact with, I'll keep using the "nonsense" status tray icons.
Most of these services do not want to be managed generically, they want to force themselves on the user. Some sync services need to keep their brand on the screen, after all. Specifically syncing services do not need to hang in notification area, when it is needed to monitor or reconfigure them, it is trivial to launch the controlling app (see syncthing).
Apps like vlc or remmina shouldn't bother with systray support at all (they both do, and have it enabled by default).
> Specifically syncing services do not need to hang in notification area, when it is needed to monitor or reconfigure them, it is trivial to launch the controlling app (see syncthing).
I want them there, so that's more than enough reason why they need to be there on my systems. No one forces you to have them on your system.
And this has nothing to do with branding nor is it useless noise, because the icon provides valuable information to me. I don't want to have to use `http://localhost:8384/` or `journalctl` every time I want to know:
* Are there some issues which prevent syncing?
* Is a sync in progress (which means I might not want to suspend/poweroff my machine just now)?
* Is syncing paused or idle?
By your logic we should also get rid of status elements like the clock, network, volume, ... because all those information can be monitored and configured with in their respective applications and system settings.
I understand the desire for cross platform toolkits, but I've accepted that there are no good library solutions, only architectural solutions and in the end that's probably a good thing. By that I mean writing to a library built on top of native APIs will never be as good as fully separating your application's UI from the core engine (with games being a giant exception to the rule) and writing a new UI for each platform.
I think cross platform toolkits like Qt, GTK, Electron/web, Swing, etc... are really their own platforms. There are definitely places where these things make sense (mostly in business or enterprise software that nobody expects to be good anyway), but it's rare that something really loved is built with them.
It's the classic thinking that we can solve the multiplicity of platforms "problem" by writing one more platform except, this time, it's a meta-platform.
It's not a great comparison, the environments seem to be mixed together rather than installed clean / separately. For example xfce lists mate-screensaver and kalendarc - I don't think you'd see that on a pure-xfce system.
It's not just installed, it actually got started based on your process lists. I don't know how bsd and xfce specifically work in this case, but some educated guesses:
- depending on whether you restarted between tests or not, DE shutdown may not clean up all services
- xfce may have a list of supported service alternatives and mate-screensaver was started as the expected-preferred one
- some services may be registered to be started together with the GUI through the .desktop files, which may explain kalendarc
It's a bit of an unfair comparison because you have to include the ZFS ARC AND Inact ("buffer" in the other test). Also, tested under VirtualBox = fewer drivers, buffers flushed, PipeWire on Linux vs. PulseAudio on FreeBSD? And so on... it's hard to compare those two tests this way.
A surprising thing is that the gap between XFCE and MATE isn't large. Maybe it's because of the modern GNOME 3's initial infamy, but I kind of assumed that MATE would be a bit more of a resource hog, but I am very happy to see that it isn't!
Considering we have many dozens of distros and tens of thousands of their permutations (different "spins", desktop environments, etc.), this test is always going to be distro-specific.
I chose Fedora because of its freshness, proximity to the source (Fedora prefers to apply as few patches as possible vs. e.g. Debian/Ubuntu) and readiness. You just install it and start working.
Fedora also bundles a bunch of software by default for a nice desktop experience for new users that you wouldn’t get with only Gnome on Arch. The difference is quite big, I’ve used both and Arch is way leaner even with a full install of the Gnome desktop.
True so yet all of them have the same/similar background services/daemons/applications, so the comparison is still valid though it will be distro-specific.
I cannot physically test all the distros and their permutations, and then there are some user-defined ones such as Gentoo or LFS.
I would like to see a similar comparison, but for the latency of completing various actions, like startup time, switching windows, opening the launcher menu, navigating to a subdirectory...
These are the things that matter to me - much moreso than RAM usage. My informal anecdotal experience is that XFCE latency is nearly always imperceptible, whereas GNOME latency is nearly always perceptible.
Thanks. For those who are interested you can download the source images at the bottom of the article. People have been questioning applications/services/daemons running in background for each DE - you can check that.
One thing that was missing for me was how you checked RAM usage, since a similar previous comparison that was posted on HN actually measured something different than what most people expected.
Since you helpfully posted the source images I checked to see what you did :)
I saw that you used the output of "free", which I think is not what most people think of as RAM usage (IIUC it reports the total virtual memory in use). I usually look at the output of `smem -atk` and look at total PSS. See https://lwn.net/Articles/230975/ for a clearer explanation on that.
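For anyone unfamiliar with PSS: the idea is that each process is charged its private memory plus a proportional slice of any memory it shares with other processes, so summing PSS across processes doesn't double-count shared libraries the way summing RSS does. A toy illustration (not smem itself, just the accounting rule):

```python
def pss_kb(private_kb, shared_kb, num_sharers):
    """Proportional Set Size: private pages plus an equal share of shared pages."""
    return private_kb + shared_kb / num_sharers

# Three processes each map the same 9000 kB shared library plus 1000 kB private data.
rss_total = 3 * (1000 + 9000)          # RSS counts the shared mapping three times: 30000 kB
pss_total = 3 * pss_kb(1000, 9000, 3)  # PSS splits it: 3 * 4000 = 12000 kB, the real footprint
print(rss_total, pss_total)
```

The real footprint here is 3 × 1000 kB private + one copy of the 9000 kB library = 12000 kB, which is exactly what the PSS sum gives.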
I completely flushed the system buffers prior to getting the results (echo 3 > /proc/sys/vm/drop_caches), so the RAM reported as free is indeed absolutely free, and what remains used cannot be reclaimed.
While memory usage is not a perfect metric of how "bloated" a piece of software is, I wouldn't say it's useless either. That memory isn't just sitting there; it's being read, written, and moved, meaning the software uses it to do things which also take up CPU and other resources. There is no escaping that doing more stuff in the background means less prioritization for your active usage, so the software feels (and is) slower to respond.
Now if that background/automated stuff is useful to you then that's that, you can't have something for nothing. But if you don't care for most of it I don't see why you would keep it around and not benefit from a snappy system. If you've never tried using a window manager instead of Desktop Environment give it a try, it's extremely simple and you can have both in parallel. See if you miss anything from DEs, I for one don't.
I still use LXDE, the predecessor to LXQT, on my daily driver: a Thinkpad T420 (~12 years old) with 8GB RAM.
Low resource usage is the main reason I use LXDE. I often need to run several (4+) VMs, each of which can eat up to 2GB RAM. (These are headless VMs running specialist networking software). When running all of them I need to be really, really careful not to use up all my RAM because if I do, my system just freezes. Everything is stuck - cursor, clock, applications, etc. At this point my only choice is to reboot.
I don't know if this is due to a poorly-configured system (isn't swap supposed to prevent this?), but it certainly makes me much, much more aware of memory usage.
It's also nice to be able to use the same base system (Debian + LXDE) on my modern T420 as well as a Pentium 3 laptop with 128MB RAM. I don't think I'd be able to say the same with KDE or Gnome!
That's interesting! I have a ThinkPad T420 that's very likely a similar age to yours, and I also ran into the issue you described... but mine manifested while running Linux Mint 19.something. At that point I had been fed up with Mint (for other reasons, so this was the final straw), so I tried - and loved! - Lubuntu. That machine runs beautifully on Lubuntu 20.04.1 LTS (since that version came out in 2020). In fact, after moving away from Mint - and getting other machines - I was going to convert this old ThinkPad into a terminal-only sort of server (you know, headless, baby)... But Lubuntu - well, specifically LXQt - runs so damn fast and smooth on it that I kept the desktop environment! I've since added another 8GB, so this machine rockets through everything I throw at it with its 16GB... and I credit Lubuntu, specifically LXQt, for that blazing speed and rock-solid stability (yes, yes, I know the stability is Ubuntu, and yes, yes, Debian underneath it all). My point being: as nice as KDE, Gnome, etc. might be, and as low-resource as DEs like XFCE are, folks should not dismiss LXQt. It might be closer in feature set to XFCE, but wow, does it run lightweight, fast, and solid!
I clicked through eager to see LXDE curbstomp all comers but alas it was not included in the results.
I use LXDE because it responds when I click its widgets. Every other desktop environment, by comparison, has a noticeable lag when responding to user input- despite using more memory. Wassup widdat?
LXQt (uses Qt) has replaced LXDE (uses GTK3), and is in the results. LXDE's latest release was Feb 2021, while LXQt's was a month ago. The creator of LXDE, Hong Jen Yee, has abandoned it for LXQt because he was dissatisfied with GTK3. I have to wonder how much more work will get done on LXDE.
> LXDE's latest release was Feb 2021, while LXQt's was a month ago. The creator of LXDE, Hong Jen Yee, has abandoned it
I'm aware LXDE's creator has abandoned it but LXQt is laggy compared to LXDE, which does everything I need it to, so I will continue to use the latter.
I am also fine with a lack of releases since for my purposes LXDE is complete and I find alternatives that receive more regular releases inferior.
Future development work on LXDE seems likely to occur in the GTK3 port.
Although again, I don't consider LXDE to require much development since it is ideal as-is. Once the wheel is round there is no need to add corners just to say it is being developed.
On SSDs, heavy swapping is still somewhat bearable. But I assume on your system you only have spinning disks, so your only option is to reboot, or to leave the system alone for a while until it recovers. Unless you want to go in and configure a more aggressive process killer for OOM situations (zram/zswap could also help, if you or your distro haven't configured those yet in the base installation).
Plus a bunch of services below that, which at this point is only noise. If palemoon works for you as a browser, that will only use 320 MB RSS empty on startup.
The machine has 64 GB RAM; no idea if Chromium by design would eat less on less beefy machines. At 4 GB RAM, zRAM is certainly an option if you can't or don't want to upgrade. Without such tweaking, modern Linux desktops benefit from 8 GB RAM, no matter what WM or DE you run. Once you start a web browser, game over. ;-)
Firefox may work a bit better when it comes to memory usage, especially because about:memory allows manual garbage collection calls if memory usage gets too high.
I've run sway on most of my home hardware for 2-3 years which ranges from i7 4 core to Ryzen 4700 8 core with 16 GB RAM and all of my post-boot measurements hover in the 512-640 MB range.
I'm usually on arch but these numbers seem consistent from debian to fedora to arch. Sway deserves credit for managing IPC latency like a cloud provider manages RPC latency.
I miss the days when personal computers were near zero latency but sway is as good as it gets in that regard, which is why I'm a fan. There's a lot to learn from the sway design.
To the author. The wording you use in the text may get you banned from your source of income, as it’s against their terms to do that. Not mentioning any brand names, as I’m paranoid that might trigger some tag based thing somewhere and lead to the ban.
There are still ways to get donations to Russia. Look at what popular opposition Telegram channels do regarding this. Crypto-hate notwithstanding, this would be a legitimate use case.
Isn't the point of having RAM to fill it in order to cache objects? To me, measuring how much RAM a DE uses is strange, because I would expect it to consume enough RAM to be snappy but be able to adapt to a constrained system. If however I have 8G free, use it away; I don't see the point in having RAM I paid for sitting around unused.
That's the point at a holistic level, but in reality it's a shared resource. A DE in particular is never the primary application - it should spend most of its time largely invisible to the user - so using a lot of RAM that actual applications may require would not be my expectation.
> be able to adapt to a constrained system
That's a fair caveat, but in reality no software is perfect & there's often a trade-off here. Being conservative with RAM usage in general (instead of trying to be both greedy & adaptive) is a much more prudent approach. The latter is ideal, but hard to achieve perfectly. Working on being reasonably good at both is going to give you the best quality outcomes.
> I have 8G free, use it away, I don't see the point in having RAM I paid for sitting around unused
That's a nonsensical sentiment. What's the point in using RAM "you paid for" for something useless/wasteful - you're not getting your money's worth whether the RAM is idle or wasted.
So at least for Gnome - the general strategy they appear to be aiming for is "Use it if you've got it, give it back if you're running low".
Which - is entirely reasonable, in my opinion.
> That's a nonsensical sentiment. What's the point in using RAM "you paid for" for something useless/wasteful - you're not getting your money's worth whether the RAM is idle or wasted.
It's not being "wasted" it's resulting in faster IO for anything that happens to be in it. Given the choice of having your RAM full or empty... you should pick full every time. You just need to make sure that "full" can be adjusted to make space for whatever application you might want to open next.
So... swinging back around to Gnome - as far as I can tell, they're trying to do exactly that (full disclosure, I can't speak for how well this is adopted).
> "Use it if you've got it, give it back if you're running low".
> You just need to make sure that "full" can be adjusted to make space for whatever application you might want to open next.
> full disclosure, I can't speak for how well this is adopted
This is the issue I'm highlighting. This is what everyone tries - engineers are one of the most vulnerable breeds to overestimating their ability to achieve an ideal plan. It rarely if ever works perfectly, and very often works sub-optimally to the point of significantly hampering overall system performance.
I don't know that Gnome abjectly fails in this goal, but I do know that it's not fast. It's significantly slower than other projects with comparatively much less engineering resources behind them. For some reason.
> It's not being "wasted" it's resulting in faster IO for anything that happens to be in it. Given the choice of having your RAM full or empty... you should pick full every time.
This is absolutely wrong, because you're basing this all on the assumption that if it's not in RAM it'll be loaded from disk. In other words, the assumption that "it" needs to exist in the first place (&/or needs to be the size that it is). This is really disingenuous: when people talk about RAM-hungry apps and wastage they're never targeting optimistic IO (good) they're targeting large unnecessary data being accessed at all.
Good low-RAM apps don't load excessively from disk. They use less data. They have more efficient raw resource handling & execution paths in the first place.
An emphasis on engineering systems that effectively & efficiently release RAM when needed elsewhere focuses person-hours on improving (very complex) pieces of logic that wouldn't be as necessary if said person-hours were instead focused on making those resources being loaded smaller / less numerous.
And generally they do that by offering fewer features (and yes - aesthetics and ergonomics are features!).
By the time I'm installing Gnome/KDE... I've already picked a heavy DE - I want it to be extensible (js/css in gnome is pretty great), pretty, and fast on a modern machine. I'm not picking those DEs for a constrained environments.
In those envs - I may well just skip the DE altogether, or go with something lighter like Sway.
But when I have 32gb of RAM... I really would prefer the feature rich DEs continue to focus on features when the usage is ~6% of RAM (2/32's). The RAM is, all things considered, outrageously cheap compared to their dev team's time.
When I want to conserve RAM... I'll pick a different tool.
> And generally they do that by offering fewer features (and yes - aesthetics and ergonomics are features!).
XFCE has more features than modern Gnome. Aesthetics comes from GTK, which it uses. Ergonomics is better in Gnome, largely due to consistency, which I'd attribute to Gnome's development effort: more people to work on more coordinated behaviours across apps & integrations - XFCE borrows small parts of Gnome's work while also using a patchwork of other solutions from elsewhere; an approach which detracts from the UX. This is about engineer resourcing though, not engineering approach to performance. The latter is an independent variable.
There's nothing "heavy" about Gnome aesthetically or feature-wise except the general inefficiency of its implementations. Especially feature-wise in its latest incarnations: ever since the Unity/Shell split etc., the Gnome feature set has contracted significantly from what it once was.
> I want it to be extensible (js/css in gnome is pretty great)
What's great about it specifically? Its unrealised potential? Gnome's extensibility is a graveyard. The pace of API changes (another code smell when it comes to engineering approach), combined with the tiny level of community contribution of Shell extensions, means most of those available for download are broken. Those that work are minimal, have inconsistent UX, or have no docs. If you've got to write your own code to customise the thing anyway, why not just use a more esoteric highly-customisable DE in the first place?
You know it's ok to just say you don't like Gnome. I'm not going to judge.
I think the 2gb it eats are essentially trivial costs to pay compared to the functionality you get (easily comparable to modern windows/macOS) given the costs of those alternatives.
I'm on my work mac right now, and it's using 26gb of ram right now (mostly dev related) - but at least 4gb of which is for non-application usage... to count:
WindowServer is 1.3GB
softwareupdated is 1.2GB
softwareUpdateNotificationManager/Finder/systemSoundserverd are another 1GB
misc services are about 0.72GB
Basically - you can complain about how inefficient Gnome is all day - I'm telling you that it DOES NOT CHANGE THE VALUE PROPOSITION.
It's plenty efficient for what it does. Please use the RAM - keep focused on features.
If the desktop environment is consuming all the RAM to eke out a bit of extra performance, then it's not available for the applications to do the same.
Ok, so to be precise I did not mean to imply a DE should be a resource hog, leak memory all over the place etc. I simply meant to say that a diff of few tens or even hundreds of MBs of RAM between DEs is realistically insignificant for most people.
I used to chase down xfce, lxde, openbox, etc. to find the best lightweight WM for me. The problem nowadays is that I need Chrome on all of them, and its memory usage alone overshadows all those window managers.
Unless Chrome can somehow run in, say, 200MB of RAM, the window manager is no longer that important to me on the resource-consumption side.
I'm running Fluxbox; I recently switched from XFCE (which I'd been using for years and was my go-to). I have to say I'm very impressed - simple and easy to use, and MX Linux's setup is certainly pretty flawless. Like a halfway house between a traditional and a tiling window manager.
Worth remembering this is the usage of a default setup (+ some indexing it seems). It's quite easy to drop a lot from that usage if you don't need akonadi, baloo, and a few other things.
LXDE is missing... I quite like it, I started using it when I was looking for the lightest DE that I could find, but now that I see IceWM and how light it is, I have to try it!
(I started with XFCE which I loved).
The only problem that I encounter is that my display is 1024x600 max resolution, so with some DEs the windows would be so big that they couldn't fit the monitor, even if I try to make them smaller...
If anyone has any suggestions for an even lighter one, please share.
Still supported by Debian and Fedora (as a separate spin). You should also be able to add it as an alternate session in Ubuntu and its derivatives, though it's not directly supported at install.
In practice though, you may find that installing some Xfce apps gives you similar performance to the LXDE originals with more up-to-date features and bug fixes. (This will be especially true as GTK+2 loses distro support over time - there may be no benefit to running LXDE-gtk+3 over Xfce.)
I found Gnome to offer the best mix of stable tiling and normal UX, while supporting both Wayland and X11 with good enough customization options through its extensions. That makes it an easier-to-set-up solution for notebooks to which I often connect a mouse.
> Post installation the system will be fully upgraded and rebooted
While most users will use an OS like this, the results are not easily reproducible, as new updates sometimes improve or worsen performance. Why not take the release image and test without updates?
What is the point of a VM? Everyone has a different environment, so everyone would get different performance in a VM depending on the host. Just compare on identical machines and that does a good job of comparison, duh.
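One way to get reproducible numbers along those lines is to revert each VM to a pristine post-install snapshot before every measurement, then read used memory right after login. Everything here is hypothetical scaffolding: the libvirt domain names (`bench-<de>`), the `clean-install` snapshot name, and the ssh hostnames are assumptions for illustration.

```shell
# Hypothetical sketch: compare DE memory from an identical, un-updated baseline.
# Assumes libvirt domains "bench-kde" etc., each with a "clean-install"
# snapshot, reachable over ssh. Exits cleanly if virsh isn't available.
command -v virsh >/dev/null || { echo "virsh not installed; skipping"; exit 0; }

for de in kde xfce lxqt; do
  virsh snapshot-revert "bench-$de" clean-install   # back to the pristine release image
  virsh start "bench-$de" 2>/dev/null || true       # revert may already resume it
  sleep 120                                         # crude wait for the session to settle
  used=$(ssh "bench-$de.local" free -m | awk '/^Mem:/ {print $3}')
  echo "$de: ${used} MB used"
done
```

The snapshot revert is what makes runs comparable: updates never enter the picture, and every DE starts from the same disk image state.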
The sad part was the request for clicking google ads as a form of patreon support. I would happily get an advertiser to pay a few cents per click but I could not see any ads.
It just brings home the impact of sanctions - Putin does not need patreon support but someone actually making a small contribution to understanding is rather stuck.
I've run this website for over a decade, and it looks like donations are just not a thing. People happily click Google ads though, so at least there's something. Something which garners around $50 a ... year.
These are just random comments on the internet, so take them with a large pinch of salt.
Your site seems to provide an interesting set of views and, importantly, an evidence/research-based set of views. I can imagine there being 50 questions (which DE uses the most memory in a VM, which config settings can minimise a window manager's footprint) - essentially best-practice settings for mid-sized installations.
Monetising that might be tricky, but building an audience of people who have that problem set is building an audience of people with problems and money.
Personally, I am just revamping my personal Docker workstation and just use XFCE because ... I might have a rethink thanks to you.
I couldn't find a donation link/BTC address/anything on the site, and I looked :/ It might not be such a bad idea! You can always ask for BTC donations, or something, at least.
The very top of your page is a big ad - if that was an inoffensive "support the author and this work", it might get you more than that top ad does
To punish and limit people without the resources to circumvent sanctions effectively? I hope not - it wouldn't make me motivated to fight my own government if it tells me that all the sanctions are proof the government is doing something necessary.
It seems to me like sanctions miss the point of what propaganda on the affected side can turn it into.
> Sawfish is an extensible window manager using a Lisp-based scripting language. Its policy is very minimal compared to most window managers. Its aim is simply to manage windows in the most flexible and attractive manner possible. All high-level WM functions are implemented in Lisp for future extensibility or redefinition.
Well, for me it seems to have tons and tons of... Lisp/Perl dependencies? I don't exactly remember what they were, but when I installed it on a bog-standard Ubuntu Server box, the list of dependencies filled my entire terminal.
Most of Emacs is written in Lisp. So calling Lisp a "dependency" just reveals that you don't have much of an idea what you are talking about. That's OK, I ramble along sometimes as well...
As said above, most of the dependencies you saw from Apt in Ubuntu were libraries from the GTK/X11/GUI ecosystem.
I used to run my Emacs as PID 1, so frankly, the dependency problem isn't really a problem...
If you find a way to reproduce this issue on another device or using any online service (I've tried two and both said it looked perfect), I'll check it out. Actually, you're the only user to report it.