jchw's comments | Hacker News

Containers are just processes plus some namespacing; nothing really stops you from running very large tasks on Kubernetes nodes. I think the argument for containers and Kubernetes is pretty good owing to their operational advantages (OCI images for distributing software, distributed cron jobs in Kubernetes, observability tools like Falco, and so forth).

So I totally understand why people preemptively choose Kubernetes before they are scaling to the point where having a distributed scheduler is strictly necessary. With Hadoop, on the other hand, you're definitely paying a large upfront cost for scalability you might very well not need.


Time to market and operational costs are much higher with Kubernetes and containers; that's from many years of actual experience, both in production and in development. It's usually a bad engineering decision. If you're doing a lift and shift, it's definitely bad. If you're starting greenfield it makes sense to pick technology stacks that don't incur this crap.

It only makes sense if you’re managing large amounts of large siloed bits of kit. I’ve not seen this other than at unnamed big tech companies.

99.9% of people are just burning money for a fashion show where everyone is wearing clown suits because someone said clown suits are good.


Writing software that works containerized isn't that bad. A lot of the time, ensuring cross platform support for Linux is enough. And docker is pretty easy to use. Images can be spun up easily, and the orchestration of compose is simple but quite powerful. I'd argue that in some cases, it can speed up development by offering a standardized environment that can be brought up with a few commands.

Kubernetes, on the other hand, seems to bog everything down. It's quite capable and works well once it's going, but getting there is an endeavor, and any problem is buried under mountains of templatized YAML.


This, 100%.

Imagine working on a project for the first time and having a Dockerfile or compose file that just downloads and spins up all dependencies and builds the project successfully. Usually that just works and you get up and running within 30 minutes or so.

On the other hand, how it used to be: having to install the right versions of, for example, Redis, Postgres, Nginx, and whatever unholy mess of build tools is required for this particular hairball, hoping it works on your particular version of Linux. Have fun with that.

Working on multiple projects, over a longer period of time, with different people, is so much easier when setup is just 'docker compose up -d' versus spending hours or days debugging the idiosyncrasies of a particular cocktail that you need to get going.


Thanks. You’ve reassured me that I’m not going mad when I look at our project repo and seriously consider binning the Dockerfile and deploying direct to Ubuntu.

The project is a Ruby on Rails app that talks to PostgreSQL and a handful of third party services. It just seems unnecessary to include the complexity of containers.


I have a lot of years of actual experience. Maybe not as much as you, but a good 12 years in the industry (including 3 at Google, and Google doesn't use Docker, it probably wouldn't be effective enough) and a lot more as a hobbyist.

I just don't agree. I don't find Docker too complicated to get started with at all. A lot of my projects have very simple Dockerfiles. For example, here is a Dockerfile I have for a project that has a Node.js frontend and a Go backend:

    FROM node:alpine AS npmbuild
    WORKDIR /app
    COPY package.json package-lock.json ./
    RUN npm ci
    COPY . .
    RUN npm run build

    FROM golang:1.25-alpine AS gobuilder
    WORKDIR /app
    COPY go.mod go.sum ./
    RUN go mod download
    COPY . .
    COPY --from=npmbuild /app/dist /app/dist
    RUN go build -o /server ./cmd/server
    
    FROM scratch
    COPY --from=gobuilder /server /server
    ENTRYPOINT ["/server"]
It is a glorified shell script that produces an OCI image with just a single binary. There's a bit of boilerplate but it's nothing out of the ordinary in my opinion. It gives you something you can push to an OCI registry and deploy basically anywhere that can run Docker or Podman, whether it's a Kubernetes cluster in GCP, a bare metal machine with systemd and podman, a NAS running Synology DSM or TrueNAS or similar, or even a Raspberry Pi if you build for aarch64. All of the configuration can be passed via environment variables or if you want, additional command line arguments, since starting a container very much is just like starting a process (because it is.)

But of course, for development you want to be able to iterate rapidly, and don't want to be dealing with a bunch of Docker build BS for that. I agree with this. However, the utility of Docker doesn't really stop at building for production either. Thanks to the utility of OCI images, it's also pretty good for setting up dev environment boilerplate. Here's a docker-compose file for the same project:

    services:
      ui:
        image: node:alpine
        ports: ["5173:5173"]
        working_dir: /app
        volumes: [".:/app:ro", "node_modules:/app/node_modules"]
        command: ["/bin/sh", "-c", "npm ci && npx vite --host 0.0.0.0 --port 5173"]
      server:
        image: cosmtrek/air:v1.60.0
        ports: ["8080:8080"]
        working_dir: /app
        volumes: [".:/app:ro"]
        depends_on: ["postgres"]
      postgres:
        image: postgres:16-alpine
        ports: ["5432:5432"]
        volumes: ["postgres_data:/var/lib/postgresql/data"]
    volumes:
      node_modules:
      postgres_data:
And if your application is built from the ground up to handle these environments well, which doesn't take much (basically, it just needs to accept configuration from the environment, and to make things a little neater it can have defaults that work well for development), this provides a one-command, auto-reloading development environment whose only dependency is having Docker or Podman installed. `docker compose up` gives you a full local development environment.
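To make the "configuration from the environment" part concrete, here's a minimal sketch in Go of what I mean; the variable names (LISTEN_ADDR, DATABASE_URL) and the defaults are just illustrative, not anything standard:

    package main

    import (
        "log"
        "net/http"
        "os"
    )

    // getenv reads a configuration value from the environment and falls back
    // to a development-friendly default when it is unset (e.g. when running
    // outside of docker compose).
    func getenv(key, fallback string) string {
        if v := os.Getenv(key); v != "" {
            return v
        }
        return fallback
    }

    func main() {
        addr := getenv("LISTEN_ADDR", ":8080")
        dsn := getenv("DATABASE_URL", "postgres://postgres:postgres@localhost:5432/app")
        log.Printf("listening on %s (database: %s)", addr, dsn)
        // ... open the database with dsn, register handlers, etc.
        log.Fatal(http.ListenAndServe(addr, nil))
    }
The same binary then runs unchanged under compose, Kubernetes, or plain systemd, with the environment doing the per-deployment wiring.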

I'm omitting a few more advanced topics, but these are lightly modified real Docker manifests, mainly just reformatted to fewer lines for HN.

I adopted Kubernetes pretty early on. I felt like it was a much better abstraction to use for scheduling compute resources than cloud VMs, and it was how I introduced infrastructure-as-code to one of the first places I ever worked.

I'm less than thrilled about how complex Kubernetes can be, once you start digging into stuff like Helm and ArgoCD and even more, but in general it's an incredible asset that can take a lot of grunt work out of deployment while providing quite a bit of utility on top.


Is there a book like Docker: The Good Parts that would build a thorough understanding of the basics before throwing dozens of ecosystem brand words at you? How does virtualisation not incur an overhead? How do CPU- and GPU-bound tasks work?

> How does virtualisation not incur an overhead?

I think the key thing here is the difference between OS virtualization and hardware virtualization. When you run a virtual machine, you are doing hardware virtualization: the hypervisor creates fake devices, like a fake SSD, and your virtual machine's kernel speaks to that fake SSD with the NVMe protocol as if it were a real physical SSD. Those NVMe instructions are then translated by the hypervisor into changes to a file on your real filesystem, so your real/host kernel then speaks NVMe again to your real SSD. That is where the virtualization overhead comes in (along with having to run that second kernel). This is somewhat helped by using virtio devices or PCIe pass-through, but it is still significant overhead compared to OS virtualization.

When you run Docker/Kubernetes/FreeBSD jails/Solaris zones/systemd-nspawn/LXC you are doing OS virtualization. In that situation, your containerized programs talk to your real kernel and access your real hardware the same way any other program would. The only difference is that your process has a flag that identifies which "container" it is in, and that flag instructs the kernel to only show/allow certain things. For example, "when listing network devices, only show this tap device" and "when reading the filesystem, only read from this chroot". You're not running a second kernel. You don't have to allocate spare RAM to that kernel. You aren't creating fake hardware, and therefore you don't have to speak to the fake hardware with the protocols it expects. It's just a completely normal process like any other program running on your computer, but with a flag.


Docker is just Linux processes running directly on the host as all other processes do. There is no virtualization at all.

The major difference is that a typical process running under Docker or Podman:

- Is unshared from the mount, net, PID, etc. namespaces, so it has its own mount points, network interfaces, and PID numbers (i.e. it has its own PID 1.)

- Has a different root mount point.

- May have resource limits set with cgroups.

(And of course, those are all things you can also just do manually, like with `bwrap`.)

There is a bit more, but well, not much. A Docker process is just a Linux process.
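To illustrate (not how Docker is implemented, just the same underlying primitive), here's a rough Linux-only Go sketch that starts a shell in fresh UTS, PID, and mount namespaces; it needs root (or an added user namespace) to actually work:

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            // Give the child its own hostname, PID numbering, and mount table.
            // Inside, it is PID 1 of its namespace; outside, it's an ordinary process.
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
Add a different root mount point and some cgroup limits and you have most of the list above; that's essentially what `bwrap`, Docker, and Podman automate for you.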

So how does accessing the GPU work? Well, sometimes there are more advanced abstractions, presumably for the benefit of stronger isolation, but generally you can just mount in the necessary device nodes and use the GPU directly, because it's a normal Linux process. This is generally what I do.


About 25 years here and 10 years embedded / EE before that.

The problem is that containers are made from images, and both those images and Kubernetes are incredibly stateful. They need to be stored. They need to be reachable. They need maintenance. And the control responsibility is inverted. You end up with a few problems which I think are not tenable.

Firstly, the state. Neither Docker itself nor the etcd behind Kubernetes is particularly good at maintaining state consistently. Anyone who runs a large Kubernetes cluster will know that once it's full of state, rebuilding it consistently in a DR scenario is HORRIBLE. It is not just a case of rolling in all your services. There's a lot of state (storage classes, roles, secrets, etc.) without which nothing works. Unless you have a second cluster you can tear down and rebuild regularly, you have no idea if you can survive a control plane failure (we have had one of those as well).

Secondly, reachability. The container engine and Kubernetes require the ability to reach out and get images. This is such a fucking awful idea from a security and reliability perspective it's unreal. I don't know how people even accept this. Typically your Kubernetes cluster or container engine has the ability to just pull any old shit off Docker Hub. That also couples you to that service being up, available, and not subject to the whims of whatever vendor figures they don't want to do their job any more (Broadcom, for example). To get around this you end up having to cache images, which means more infrastructure to maintain. There is of course a whole secondary market for that...

Thirdly, maintenance. We have about 220 separate services. When there's a CVE, you have to rebuild, test and deploy ALL those containers. We can't just update an OS package and bounce services or push a new service binary out and roll it. It's a nightmare. It can take a month to get through this, and believe me, we have all the funky CD stuff.

And as mentioned above, control is inverted. I think it's utterly stupid on this basis that your container engine or cluster pulls containers in. When you deploy, the relationship should be a push because you can control that and mandate all of the above at once.

In the attempt to solve problems, we created worse ones. And no one is really happy.


I get your points but I'm not sure I agree. Kubernetes is a different kind of difficulty, but I don't think it's so different from handling VM fleets.

You can have 220 VMs instead and need to update all of them too. They are also full of state, and you will need some kind of automated deployment (like Ansible) to make it bearable, just like your k8s cluster. If you don't configure a network egress firewall, both can also pull whatever images/binaries they want from Docker Hub or the internet.

> To get around this you end up having to cache images which means more infrastructure to maintain

If you're not doing this for your VMs' packages and your code packages, you have the same problem anyway.

> When there's a CVE

If there is a CVE in your code, you have to rebuild all your binaries anyway. If it's in the system packages, you have to update all your VMs. Arguably, updating a single container and doing a rolling deployment is faster than updating x VMs. In my experience updating VMs was harder and more error prone than updating a service description to bump a container version (you don't just update a few packages; sometimes you need to go from CentOS 5 to CentOS 7/8 or something, and it also takes weeks to test and validate).


I mostly agree with you, with the exception that VMs are fully isolated from one another (modulo sharing a hypervisor), which is both good and bad.

If your K8s cluster (or etcd) shits the bed, everything dies. The equivalent to that for VMs is the hypervisor dying, but IME it’s far more likely that K8s or etcd has an issue than a hypervisor. If nothing else, the latter as a general rule is much older, much more mature, and has had more time to work out bugs.

As to updating VMs, again IME, typically you’d generate machine images with something like Packer + Ansible, and then roll them out with some other automation. Once that infrastructure is built, it’s quite easy, but there are far more examples today of doing this with K8s, so it’s likely easier to do that if you’re just starting out.


> If your K8s cluster (or etcd) shits the bed, everything dies.

When etcd and/or kubelet shits the bed, it shouldn't do anything other than halt scheduling tasks. The actual runtime might vary between setups, but typically containerd is the one actually handling the individual pod processes.

Of course, you can also run Kubernetes pods in a VM if you want to, there have always been a few different options for this. I think right now the leading option is Kata Containers.

Does using Kata Containers improve isolation? Very likely: you have an entire guest kernel for each domain. Of course, the entire isolation domain is subject to hardware bugs, but I think people do generally regard hardware security boundaries somewhat higher than Linux kernel security boundaries.

But, does using Kata Containers improve reliability? I'd bet not, no. In theory it would help mitigate reliability issues caused by kernel bugs, but frankly that's a bit contrived, as most of us never or extremely infrequently experience the type of bug that this mitigates. In practice, what happens is that the point of failure switches from being a container runtime like containerd to a VMM like qemu or Firecracker.

> The equivalent to that for VMs is the hypervisor dying, but IME it’s far more likely that K8s or etcd has an issue than a hypervisor. If nothing else, the latter as a general rule is much older, much more mature, and has had more time to work out bugs.

The way I see it, mature code is less likely to have surprise showstopper bugs. However, if we're talking qemu + KVM, that's a code base that is also rather old, old enough that it comes from a very different time and place for security practices. I'm not saying qemu is bad, obviously it isn't, but I do believe that many working in high-assurance environments have decided that qemu's age and attack surface is large enough to have become a liability, hence why Firecracker and Cloud Hypervisor exist.

I think the main advantage of a VMM remains the isolation of having an entire separate guest kernel. Though, you don't need an entire Linux VM with complete PC emulation to get that; micro VMs with minimal PC emulation (mostly paravirtualization) will suffice, or possibly even something entirely different, like the way gVisor is a VMM but the "guest kernel" is entirely userland and entirely memory safe.


I think his point is that instead of hundreds of containers, you can just have a small handful of massive servers and let the multitasking OS deal with it.

Containers are too low-level. What we need is a high-level batch job DSL, where you specify the inputs and the computation graph to perform on those inputs, as well as some upper limits on the resources to use, and a scheduler will evaluate the data size and decide how to scale it. In many cases that means it will run everything on a single node, but in any case data devs shouldn't be tasked with making things run in parallel, because the vast majority aren't capable of doing it well and end up making very bad choices.

And by the way, what I just described is a framework that Google has internally, named Flume. 10+ years ago they had already noticed that devs weren't capable of using Map/Reduce effectively, because tuning the parallelism was beyond most people's abilities, so they came up with something much more high-level. Hadoop is still a Map/Reduce clone, thus destined to fail at usability.
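For anyone curious what that looks like outside Google: Apache Beam descends from the same Flume/Dataflow lineage, and its Go SDK is roughly the kind of DSL I mean; you declare the graph, and the runner decides how (and how much) to parallelize. Rough sketch, with placeholder bucket paths:

    package main

    import (
        "context"
        "flag"
        "fmt"
        "log"
        "strings"

        "github.com/apache/beam/sdks/v2/go/pkg/beam"
        "github.com/apache/beam/sdks/v2/go/pkg/beam/io/textio"
        "github.com/apache/beam/sdks/v2/go/pkg/beam/transforms/stats"
        "github.com/apache/beam/sdks/v2/go/pkg/beam/x/beamx"
    )

    func main() {
        flag.Parse()
        beam.Init()

        p := beam.NewPipeline()
        s := p.Root()

        // Declare the computation graph; no parallelism decisions here.
        lines := textio.Read(s, "gs://example-bucket/input/*.txt") // placeholder path
        words := beam.ParDo(s, func(line string, emit func(string)) {
            for _, w := range strings.Fields(line) {
                emit(w)
            }
        }, lines)
        counts := stats.Count(s, words)
        formatted := beam.ParDo(s, func(w string, c int) string {
            return fmt.Sprintf("%s: %d", w, c)
        }, counts)
        textio.Write(s, "gs://example-bucket/output/counts.txt", formatted)

        // The same graph runs in-process or on a distributed runner
        // (Dataflow, Flink, Spark) depending on flags, not code changes.
        if err := beamx.Run(context.Background(), p); err != nil {
            log.Fatal(err)
        }
    }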


Books are not designed to reproduce colors though, and monitors are. If you have aggressive auto-brightness settings, that wouldn't actually make a monitor appear more like a book; it would just make it so the stuff that is actually supposed to look blisteringly white is merely mild. Which, sure, is an improvement for eye strain, but it's more of a workaround than a solution, and since it would muck up color reproduction, a lot of users couldn't do this all the time anyways.

I don't really identify with any particular movement, but it's important to note that there are plenty of people who legitimately oppose the core concept of rationalism, the idea that reason should be held above other approaches to knowledge; this is setting aside the other criticisms leveled at the group of people who call themselves rationalists. Apparently, rationalism isn't obviously correct. Unfortunately, I don't really have enough of a background in philosophy to understand how this follows, but looking at how the world actually works, I don't struggle to believe that most people (certainly many decision makers) don't actually regard rationality as highly as other things, like tradition.

Rationalism in philosophy is generally contrasted with empiricism. I would say you're a little off in characterizing anti-rationalism as holding rationality per se in low regard. To put it very briefly: the Ancient Greeks set the agenda for Western philosophy, for the most part: what is truth? What is real? What is good and virtuous? Plato and his teacher/character Socrates are the archetype rationalists, who believed that these questions were best answered through careful reasoning. Think of Plato's allegory of the cave: the world of appearances and of common sense is illusory, degenerate, ephemeral. Pure reason, as done by philosophers, was a means of transcendent insight into these questions.

"Empiricism" is a term for philosophical movements (epitomized in early modern British Empiricists like Hume) that emphasized that truths are learned not by reasoning, but by learning from experience. So the matter is not "is rationality good?" but more: what is rationality or reason operating upon? Sense experiences? Or purely _a priori_, conceptual, or formal structures? The uncharitable gloss on rationalism is that rationalists hold that every substantive philosophical question can be answered while sitting in your armchair and thinking really hard.


You're (understandably) confusing rationalism the philosophy from the Enlightenment with the unrelated modern rationalist community.

For what it's worth, the modern rationalists are pro-empiricism with Yudkowsky including it as one of the 12 core virtues of rationality.


Oh! :) I saw "philosophy" and "rationalism" in the same paragraph and went into auto-pilot I suppose.

It's pretty unfortunate that the Yudkowsky-and-LessWrong crowd picked a term that traditionally meant something so different. This has been confusing people since at least 2011.

Well empiricists think knowledge exists in the environment and is absorbed directly through the eyes and ears without interpretation, if we're being uncharitable.

Sure. The idea of raw, uninterpreted "sense data" that the empiricists worked with (well into the 20th century) is pretty clearly bunk. Much of philosophy took a turn towards anti-foundationalism, and rationalism and empiricism are, at least classically, notions of the "foundations" of knowledge. I mean, this is philosophy, it's all pretty ridiculous.

> Apparently, rationalism isn't obviously correct. Unfortunately, I don't really have enough of a background in philosophy to really understand how this follows, but looking at how the world actually works, I don't struggle to believe that most people (certainly many decision makers) don't actually regard rationality as highly as other things, like tradition.

Other areas of human experience reveal the limits of rationality. In romantic love, for example, reason and rationality are rarely pathways to what is "obviously correct".

Rationality is one mode of human experience among many and has value in some areas more than others.


Seeing the outcomes of romantic love makes me think it should never be used as an example of correctness in any way.

there are two facets to "is rationalism good".

one is, "is there a rational description of the universe, the world, humanity, etc.". Some people think there isn't, but I would like to think that the universe does conform to some rational system.

the other, and more important one, is "do humans have the capability to acquire and fully model this rational system in their own minds", and I don't think that's a given. The human brain is just an artifact of an evolutionary system, which only implies that its owners can survive and persist on the earth as it happens to exist in the current 50K-year period. It's not clear that humans have even a slight ability to be perfectly rational analytic engines, as opposed to unique animals responding to desires and fears. This is why it's so silly when "rationalists" try to appear so above all the other lowly humans, as though escaping human nature is even an option.


Uh-huh. Rationality is open-ended, we're apparently not very good at it and room for improvement is plentiful. However, I can still try to be rational, and approve of rationality.

see that? you didn't even read what I wrote and responded to something else. then I'm not able to not be snarky about it.

My apologies. But are you really saying that we're not even able to try to be rational, or to improve? "Perfect rationality" sounds like "perfect knowledge": it's a mind-boggling concept belonging to such a far-distant future that we'll probably revise the concept away before we get anywhere near it. So why present it as a goal? Being slightly more rational is a practical goal, unless you're saying human nature won't allow even that much.

> My apologies. But are you really saying that we're not even able to try to be rational, or to improve?

not at all

> "Perfect rationality" sounds like "perfect knowledge", it's a mind-boggling concept belonging to a such a far distant future that we'll probably revise the concept away before we get anywhere near it.

my statement refers to a general vibe from people who call themselves "rationalists": they go on the assumption that they are rational, while everyone else is not. Which is ridiculous. everyone "tries" to be rational. of course everyone should "try" to be rational. That's what everyone is doing most of the time, regardless of how poorly we judge their success.

> Being slightly more rational is a practical goal, unless you're saying human nature won't allow even that much.

Everyone should be "slightly more rational". The rationalists state that they *are* more rational, and then they go on to have fixations on such "rational" things as proving that "race" is real and determines intelligence. Totally missing what their brains are actually doing, since they are so "rational".


Accusing people of using generative AI is definitely one of those things you have to be careful with, but on the other hand, I still think it's OK to critique writing styles that are now cliche because of AI. I mean come on, it's not just the negative-positive construction. This part is just as cliche:

> It is like having a Lego set where the bricks refuse to click if you are building something structurally unsound.

And the headings follow that AI-stank rhythmic pattern with most of them starting with "The":

> The “Frankenstein” Problem

> The Basic Engine

> The Ignition Key

> The Polyglot Pipeline

I could go on, but I really don't think you have to.

I mean look, I'm no Pulitzer prize winner myself, but let's face it, it would be hard to make an article feel more like it was adapted from LLM output if you actually tried.


Wouldn't this still result in just two paragraph elements? Yes, the first gets auto-closed, but I don't see how a third paragraph could emerge out of this. Surely that closing tag should just get discarded as invalid.

edit: Indeed, it creates three: the </p> seems to create an empty paragraph tag. Not the first time I've been surprised by tag soup rules.


Yeah, Hacker News does that. I've heard you can simply edit the title after submitting to fix Hacker News' "fixes" but I've not submitted enough things to give it a try.

This is correct (I've done it a few times). I think there's an edit window though and at 8 hours we're well outside that.

The other option (e.g. when it's not your submission) is to email the mods, which I've just done, and they will fix it up if appropriate.


This doesn't make any sense to me.

If you wanted to verify the contents of a dependency, you would want to check go.sum. That's what it is there for, after all. So if you wanted to fetch the dependencies, then you would want to use it to verify hashes.

If all you care about is the versions of dependencies, you really can (and should) trust go.mod alone. You can do this because there are multiple overlapping mechanisms that all ensure that a tag is immutable once it is used:

- The Go CLI tools will of course use the go.sum file to validate that a given published version of a module can never change (at least since it was first depended on, but it is also complementary with the features below, so it can be even better than that.)

- Let's say you rm the go.sum file. That's OK. They also default to using the Go Sum DB to verify that a given published version of a module can never change. So if a module has ever been `go get`'d by a client with the Sum DB enabled and it's publicly accessible, then it should be added to the Sum DB, and future changes to tags will cause it to be rejected.

- And even then, the module proxy is used by default too, so as soon as a published version is used by anyone, it will wind up in the proxy as long as it's under a suitable license. Which means that even if you go and overwrite a tag, almost nobody will ever actually see this.

The downside is obviously all of this centralized infrastructure that is depended on, but I think it winds up being the best tradeoff; none of it is a hard dependency, even for the "dependencies should be immutable" aspect thanks to go.sum files. Instead it mostly helps dependency resolution remain fast and reproducible. Most language ecosystems have a hard dependency on centralized infrastructure, whether it is a centralized package manager service like NPM or a centralized repository on GitHub, whereas the centralized infrastructure with Go is strictly complementary and you can even use alternative instances if you want.

But digression aside, because of that, you can trust the version numbers in go.mod.
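And if you ever do need the version list programmatically, the supported route is golang.org/x/mod rather than parsing go.mod by hand; a quick sketch (assuming a go.mod in the current directory):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/mod/modfile"
    )

    func main() {
        data, err := os.ReadFile("go.mod")
        if err != nil {
            log.Fatal(err)
        }
        // Parse with the same go.mod parser the Go toolchain uses.
        f, err := modfile.Parse("go.mod", data, nil)
        if err != nil {
            log.Fatal(err)
        }
        for _, r := range f.Require {
            fmt.Printf("%s %s (indirect: %v)\n", r.Mod.Path, r.Mod.Version, r.Indirect)
        }
    }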


> If you wanted to verify the contents of a dependency, you would want to check go.sum

You're right, but also TFA says "There is truly no use case for ever parsing it outside of cmd/go". Since cmd/go verifies the contents of your dependencies, the point generally stands. If you don't trust cmd/go to verify a dependency, then you have a valid exception to the rule.


Agreed. Arguably, though, it would be much more reasonable to trust cmd/go to verify a dependency than it would to trust your own code. A lot more effort is put into it and it has a proper security process established. So I think the point is, if you find yourself actually needing to verify the go.sum, not by using cmd/go, you are very likely doing something wrong.

A local cache of sums is also stored in (iirc) the module cache under $GOMODCACHE, so even if you delete go.sum from the project, the local toolchain should still be able to verify module versions previously seen without needing to call out to the Sum DB.

Probably unpopular, but I just use Bazel and pick the versions of software I use.

I know the current attitude is to just blindly trust 3rd party libraries (current and all future versions) and all of their dependencies, but I just can't accept that. This is just unsustainable.

I guess I'm old or something.


Go MVS does not require you to blindly trust 3rd party libraries. Certainly not "current and all future versions". Go modules also offer hermetic and reproducible dependency resolution by default.

I dunno about Gemini CLI, but I have tried Google Antigravity with Gemini 3 Pro and found it extremely superior at debugging versus the other frontier models. If I threw it at a really, really hard problem, I always expected it to eventually give up, get stuck in loops, delete a bunch of code, fake the results, etc. like every other model and every other version of Gemini always did. Except it did not. It actually would eventually break out of loops and make genuine progress. (And I let it run for long periods of time. Like, hours, on some tricky debugging problems. It used gdb in batch mode to debug crashes, and did some really neat things to try to debug hangs.)

As for wit, well, not sure how to measure it. I've mainly been messing around with Gemini 3 Pro to see how it can work on Rust codebases, so far. I messed around with some quick'n'dirty web codebases, and I do still think Anthropic has the edge on that. I have no idea where GPT 5.2 excels.

If you could really compare Opus 4.5 and GPT 5.2 directly on your professional work, are you really sure it would work much better than Gemini 3 Pro? i.e. is your professional work comparable to your private usage? I ask this because I've really found LLMs to be extremely variable and spotty, in ways that I think we struggle to really quantify.


Is Gemini 3 Pro better in Antigravity than in gemini-cli ?

For coding it is horrible. I used it exclusively for a day and switching back to Opus felt like heaven. Ok, it is not horrible, it is just significantly worse than competitors.

Although it sounds counter-intuitive, you may be better off with Gemini 3 Fast (esp. in Thinking mode) rather than Gemini 3 Pro. Fast beats Pro in some benchmarks. This is also the summary conclusion that Gemini itself offers.

Unfortunately, I don't know. I have never used Gemini CLI.

The MVS choices will be encoded into the go.mod; you may have been correct in the past, but as the post mentions, transitive dependencies have been incorporated since Go 1.17. So yes, really: the only point of go.sum is to enable checking the integrity of dependencies, as a nice double-check against the sumdb itself.
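For illustration (module paths and versions made up), a go.mod written by Go 1.17+ records the transitive requirements too, with the indirect ones split into their own block:

    module example.com/app

    go 1.22

    require github.com/direct/dep v1.4.2

    require (
        github.com/transitive/one v0.9.1 // indirect
        github.com/transitive/two v1.8.0 // indirect
    )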

There's no obvious reason for an end user to switch to Wayland if there aren't any particular problems with their current setup; the main improvements come down to things X11 never supported particularly well, which are unlikely to be used in many existing X11 setups. My big use case that Wayland enabled was being able to dock my laptop and seamlessly switch apps between displays with different scale factors. And as an added bonus, my experience has been that apps, even proprietary ones like Zoom, tend to handle switching scale factors completely seamlessly. It's not of the highest importance, but I do like polish like this. (Admittedly, this article outlines that foot on Sway apparently doesn't handle this as gracefully. The protocols enable seamless switching, but of course, they can't really guarantee apps will always render perfect frames.)

OTOH though, there are a lot of reasons for projects like GNOME and KDE to want to switch to Wayland, and especially why they want to drop X11 support versus maintaining it indefinitely forever, so it is beneficial if we at least can get a hold on what issues are still holding things up, which is why efforts like the ones outlined in this blog post are so important: it's hard to fix bugs that are never reported, and I especially doubt NVIDIA has been particularly going out of their way to find such bugs, so I can only imagine the reports are pretty crucial for them.

So basically, this year the "only downsides" users need to at least get to "no downsides". The impetus for Wayland itself mainly hinges on features that simply can be done better in a compositor-centric world, but the impetus for the great switchover is trying to reduce the maintenance burden of having to maintain both X11 and Wayland support forever everywhere. (Support for X11 apps via XWayland, though, should basically exist forever, of course.)


> having to maintain both X11 and Wayland support forever everywhere

I don't get why X11 shouldn't work forever. It works today. As you said, there's no obvious reason for an end user to switch to Wayland if there isn't any particular problems with their current setup. "Because it's modern" and "Because it's outdated" just aren't compelling reasons for anyone besides software developers. And "because we're going to drop support so you have to switch eventually" is an attitude I'd expect out of Apple, not Linux distributions.


Sometimes gnome developers out-apple apple in their attitudes, fwiw.


That was the first thing I noticed when I recently went back to messing with Linux distros after 15 years. Booting into Ubuntu and having to use Gnome Tweaks or whatever it’s called for basic customizations was incredibly confusing considering Linux is touted as being the customizable and personal OS. I doubt I’ll ever give Gnome another try after that.


Same, so I switched to KDE and life has been good.


I get the impression GNOME 3 is loosely a clone of macOS; I much prefer a Windows-esque desktop. I've never tried KDE but feel pretty at home with Xfce or Openbox. YMMV, but if you have the time they're worth trying if you're a recent Windows refugee.


GNOME is a much closer match for iPadOS than it is macOS due to how far it goes with minimalism, as well as how it approaches power user functionality (where macOS might move it off to the side or put it behind a toggle, GNOME just won’t implement it at all). Extensions can alleviate that to a limited extent, but there are several aspects that can’t be improved upon without forking.


Funny that you mention this, because broadly GNOME is seen as Linux's macOS, and KDE as Linux's Android. At least in terms of user customization.


Last time I ran Linux as a daily driver, it was the opposite. Maybe my graybeard is showing.


X11 as a protocol will probably continue to work ~forever.

X11 as a display server will continue to work ~forever as long as someone maintains a display server that targets Linux.

KDE and GNOME will not support X11 forever because it's too much work. Wayland promises to improve on many important desktop use cases where X.org continues to struggle and where the design of X11 has proven generally difficult to improve. The desktop systems targeting Linux want these improvements.

> "Because it's modern" and "Because it's outdated" just aren't compelling reasons for anyone besides software developers.

I can do you one better: that's also not really compelling to software developers either most of the time. I beg you to prove that the KDE developers pushed Wayland hard because they badly wanted to have to greatly refactor the aging and technical debt heavy KWin codebase, just for the hell of it. Absolutely not.

The Wayland switchover that is currently ongoing is entirely focused on end users, but it's focused on things they were never able to do well in X11, and it shows. This is the very reason why Wayland compositors did new things better before they handled old use cases at parity. The focus was on shortcomings of X11 based desktops.

> And "because we're going to drop support so you have to switch eventually" is an attitude I'd expect out of Apple, not Linux distributions.

Yeah. Except Apple is one of the five largest companies in the United States and GNOME and KDE are software lemonade stands. I bet if they could they would love to handle this switchover in a way that puts no stress on anyone, but as it is today it's literally not feasible to even find the problems that need to be solved without real users actually jumping on the system.

This isn't a thing where people are forcing you to switch to something you don't want under threat of violence. This is a thing where the desktop developers desperately want to move forward on issues, they collectively picked a way forward, and there is simply no bandwidth (or really, outside of people complaining online, actual interest) for indefinitely maintaining their now-legacy X11-based desktop sessions.

It actually would have been totally possible, with sufficient engineering, to go and improve things to make it maintainable longer term and to try to backport some more improvements from the Wayland world into X11; it in fact seems like some interested people are experimenting with these ideas now. On the other hand though, at this point it's mostly wishful thinking, and the only surefire thing is that Wayland is shipping across all form factors. This is no longer speculative, at this point.

If you really want to run X.org specifically, that will probably continue to work for a decently long time, but you can't force the entire ecosystem to all also choose to continue to support X.org any more than anyone can force you to switch to Wayland.


> I don't get why X11 shouldn't work forever.

Sure, assuming nothing else changes around it maybe. It'll work in the sense retrocomputing works.

However, the people who used to maintain Xorg are the ones who created Wayland. Xorg is largely neglected now; it still works mostly by luck.


I mean, because maintaining software is hard and costly, and a lot of this is developed by enthusiasts in their spare time?

Supporting legacy stuff is universally difficult, and makes it significantly harder to implement new things.


At the end of the day these developers are almost entirely volunteers. Codebases that are a mess, i.e. X11, are not enjoyable to work on, and therefore convincing people to spend their discretionary time on them is more difficult. If there weren't Wayland, the current set of Wayland developers might not have been doing DE work at all.

Attracting new contributors is an existential problem in OSS.


This was my motivation for switching too: better screen size management. Things don't scale weirdly when I plug a 4k laptop into a 1080p monitor. Otherwise I'm not sure I'd advise people to switch.


