Bazzite is so much worse than SteamOS in terms of usability. I installed it on my Deck and it was full of bugs: the ISO is the biggest image I have ever seen (~10 GB), KDE crashes way too often, invoking the on-screen keyboard doesn't work half the time, and it comes with so much bloat (Waydroid being one example). It's a joke to even call it a better "experience".
Also, immutability absolutely sucks as a daily driver. I have to wait 10 minutes for rpm-ostree to install a package, while a normal distro can do it in seconds. Immutability makes sense for routers and for VMs running core services (a reverse proxy, DNS, etc.).
> Also, immutability absolutely sucks as a daily driver. I have to wait 10 minutes for rpm-ostree to install a package
That's not a problem with immutability in general... rpm-ostree is just horrendously slow. On NixOS I can install a new package in a few seconds and start using it without a reboot.
(I used Silverblue for a bit before switching to NixOS so I know your pain)
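For what it's worth, a quick imperative install on NixOS looks something like this on my machine (assuming the flakes/nix-command features are enabled; the package is just an example):

    nix profile install nixpkgs#ripgrep   # finishes in seconds
    rg --version                          # usable immediately, no reboot needed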
The idea is that you do not install packages, at all. Instead, you would use Flatpak and the like, plus mutable containers (Distrobox or whatever) if needed.
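Roughly like this (the app ID and container image are just examples):

    flatpak install flathub org.mozilla.firefox        # GUI apps via Flatpak
    distrobox create --name devbox --image fedora:41   # mutable container for everything else
    distrobox enter devbox                              # then install packages inside as usual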
Flatpak wants to download a gazillion dependencies before the actual package. And even the original package size is huge compared to apk, deb, rpm, etc. Stop with the bloat. Native package managers are always going to be fast, easy to use and have a minimal size.
> Native package managers are always going to be fast, easy to use and have a minimal size
They become necessary when you are dealing with Windows-style "applications" that drag in enormous amounts of dependencies. I had LibreOffice fall over on Void Linux because of some JVM issue. You think I want to debug a JVM issue in 2020-something just to type some text? No thanks - Flatpak.
Standard Debian packages share their dependencies. Every Flatpak package is a special snowflake with its own dependencies. That's the point of it. But it makes download and install sizes enormous.
Flatpaks also share dependencies. Usually not quite as much as distro-specific packages, but it's wrong to suggest that each Flatpak ships all dependencies separately.
I know this was sarcasm, but my experience is that I download much less with Debian's APT than I did with Flatpak.
I can think of two explanations: (1) Debian packages have more shared parts, and (2) they have optional dependencies ("recommends" and "suggests") which I disable by default. Because of (1), there will be many library packages to download, but the overall volume is reduced.
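For reference, I disable them with a small apt config snippet along these lines (the filename is arbitrary):

    # /etc/apt/apt.conf.d/99norecommends
    APT::Install-Recommends "false";
    APT::Install-Suggests "false";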
This. Installing packages via rpm-ostree is something one can do, but in most cases it should not be necessary. It's more of an escape hatch than an everyday tool. Most additional software will be installed in the user's home directory.
The uBlue folks do make a point of this in their docs: OSTree layering is a last-resort install method. Flatpaks first (or Brew for CLI tools), other types of containers second, and OSTree last.
> Simple, right? This copies a file to run a Nginx container as a quadlet and a config into the /etc folder. This brings GitOps to my OS. With this whenever my machine starts, whether for the first time or millionth time, its going to be configured to work exactly as expected with no extra work or additional configuration.
That isn't GitOps. That's an immutable file system. Make it as immutable as you want; that has nothing to do with GitOps. You need the other half of the story: the infrastructure-as-code and the deployment.
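For context, the quadlet mentioned in the quote is just a small unit file that Podman turns into a systemd service, something along these lines (path and values are illustrative):

    # /etc/containers/systemd/nginx.container
    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80

    [Install]
    WantedBy=multi-user.target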
After reading the article I don't get why I need to know about bootc. It's something to do with immutability, like a couple of OSes I've never used, only one of which I'd even heard of.
Most Linux distributions today have moved all the distribution-specific system files under the /usr hierarchy, which means that a Linux distribution consists mostly of the /usr and /boot directories. In addition, you have files in /var and /etc that can be modified by the local system administrator, and you have /home (which is actually /var/home internally) where regular users can save local files.
The local admin can still influence the system by editing files in /etc but she cannot mess with the files in /usr. The key benefit is that distributors can roll out very well-defined Linux distributions that still allow for local modifications in non-critical areas.
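On a Silverblue-style system that plays out roughly like this (the exact error text may vary):

    $ sudo touch /usr/bin/example
    touch: cannot touch '/usr/bin/example': Read-only file system
    $ sudo touch /etc/example.conf   # fine: /etc and /var stay writable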
The main takeaway is that you can build your OS image like any OCI image, and you can boot these OCI images.
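A minimal sketch of that flow (the base image is real, everything else is just an example):

    # Containerfile
    FROM quay.io/fedora/fedora-bootc:41
    RUN dnf -y install htop tmux && dnf clean all

    # build, push, and point the machine at it
    podman build -t ghcr.io/example/my-os:latest .
    podman push ghcr.io/example/my-os:latest
    sudo bootc switch ghcr.io/example/my-os:latest   # takes effect on the next boot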
I've been using an immutable OS for over 2 years now, and the main advantage to me is the ease of rollback. I even started using it professionally in my work, with the main selling point being easier lifecycle management.
Bootc is just an advancement of this; it's a next-generation rpm-ostree. I could already boot my own OCI images before.
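The day-to-day loop is pleasantly boring; something like this (these are bootc's own subcommands, though exact behavior depends on your setup):

    sudo bootc upgrade    # stage the new image; it takes effect on the next boot
    sudo bootc rollback   # queue the previous deployment if the new one misbehaves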
Okay, now I am concerned, because almost every time somebody tells me "nobody is forcing you to use this new technology", in 5 or so years (if this technology survives) I start getting forced to use it because it's "mature enough, and is basically the new industry standard; what you are currently using is going to be deprecated in a couple of years". Apparently, if someone wants to make their new way the only way, they have to tell people "of course we're not forcing anyone to do it this way" for the first several years, while working on rooting out the existing ways.
Anyway, I digress; what I actually would like to know is the benefits/downsides compared to e.g. immutable boot partitions, or whatever. After all, I don't generally build my own OS images, neither in .iso nor in OCI-image format; I take and use off-the-shelf ones, but that generally involves dd-ing them onto /dev/sdX at the very least. Using prepared .vdi's is another option.
I generally don't build my own OS images either, because it's such a hassle and adds such overhead to lifecycle management.
But I believe that building OCI images greatly reduces that overhead, mainly because the OS I want to use already has a base image I can start from, so my build is only a few lines of customization.
Your first paragraph is kinda funny to me because 10 years ago I felt like an outsider supporting systemd, and now it's everywhere.
Sounds interesting, but since I haven't kept up with this topic much, I ask myself: does this have any benefit for my personal home-computer usage?
For a long time I've had the urge to try out Nix, because I clutter up my computer way too fast and therefore often get mad and just install a fresh system. This works fine for my files, but there are always applications I forget about and whose configs I forget to save. So having all of this in a git repo to spin it up fast would be nice. Are bootc, Fedora Silverblue and so forth trying to achieve something similar?
I think bootc is exactly what you're looking for. I use it[1] for configuration like you mentioned but also for:
- Installing codecs from third-party repositories. This is especially nice to do in CI because you get a build failure if packaging drift happens.
- Installing out-of-tree drivers. Again, you get a build failure in CI if an out-of-tree kernel module won't build. In addition, you can use multi-stage builds (see the Dockerfile in my repo for an example) to avoid pulling dependencies into your final system image. This saves me from having the 70 or so RPM packages that are required for building NVIDIA drivers installed on my PC.
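Something like this multi-stage shape (not my exact Dockerfile; package names, repos and paths are illustrative, and it assumes a repo providing akmod-nvidia is enabled):

    FROM quay.io/fedora/fedora-bootc:41 AS builder
    # build-only dependencies live in this stage
    RUN dnf -y install kernel-devel akmod-nvidia && \
        akmods --force --kernels "$(rpm -q --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n' kernel)"

    FROM quay.io/fedora/fedora-bootc:41
    # only the built kmod RPMs make it into the final image
    COPY --from=builder /var/cache/akmods/ /tmp/akmods/
    RUN dnf -y install /tmp/akmods/nvidia/*.rpm && \
        rm -rf /tmp/akmods && dnf clean all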
It's not as ambitious as NixOS but I think it gives a lot of the same benefits with far less effort.
Okay, but at that point why bother with the intermediate OCI images? Especially with Nix: if you're going to use Nix you may as well build the OS directly (i.e. use NixOS).
OCI isn't a particularly good image format; the only thing it has going for it is that it's the thing Docker uses. I would absolutely not be surprised if 90% of future bootc OCI images are built with Dockerfiles.
I want to know about - or rather, use - bootc. Unfortunately, the problem is this:
FROM quay.io/fedora/fedora-bootc:41
If you're using Fedora (or maybe RHEL) and happy to use premade base images then it's apparently great. The moment you step off that happy path, good luck; nobody has written the code, and the docs are inadequate to do it yourself. Or at least, they were for me: I wanted to make an Alpine bootc image, but everything was shades of "now start from this Fedora image, install this magic package with the systemd integration, invoke the rpm wrapper, and it all works!", which was kind of a problem when trying to integrate with a system that had none of those things. It was annoying because I'm pretty sure the tech is actually distro-agnostic, but it was too underdocumented to use.
Am I not a member of the community? I am trying to build bootc images, but I can't work from nothing.
Just now, I thought I'd go check and see if it'd improved, and guess what? It's actually worse than I remembered.

There's a list of distros using it at https://github.com/bootc-dev/bootc/blob/main/ADOPTERS.md - but the only non-Fedora distros they list are RHEL and HeliumOS, which is a CentOS Stream derivative. (There's a list of ostree users, but that's not the same, and they even admit that this is more of a 'hey, these use related tech and someday maybe they can use this'.)

So I went looking through the docs and found https://bootc-dev.github.io/bootc/installation.html which again says that this really only supports Fedora/CentOS/RHEL, but does link to the issue tracking work for others to use it. That's https://github.com/coreos/bootupd/issues/468 , which is 2 years old and... basically is, again, a bunch of different folks saying they'd like to make it work on different distros and getting nowhere.

There is one person who posted on https://github.com/bootc-dev/bootc/issues/865 and who claims to have converted an ostree Arch system to using bootc, but they didn't post steps to reproduce. Actually, clicking around I eventually found their repo https://github.com/frap129/arch-bootc/tree/main which again doesn't say how to do it, but does appear to contain their code, so maybe I can reverse-engineer from there...
Anyways.
The community would very much like to step up, but we can only step up over so high of a learning curve, or possibly so high of a porting curve (a lot of comments indicate being tied to rpm integration).
I do see that the poster mentions: "Of course all of this is mitigated by using Fedora's boot-related packages, but that certainly is only a workaround to get it running at all."
If you're interested, I can talk to the bootc team about documenting what needs to be done and how it can be stated more clearly. Do you know Rust well enough to get involved if I can convince the group to get the party started?
I don't know Rust. I had assumed that all that was needed was to feed bootc-image-builder an image with files in the right places and it would work; if it needs to actually run native code (not just call a shell script or something) to integrate with the package manager, then it's over my head. And... that also seems like an overly tightly coupled design, IMHO. Hopefully https://github.com/coreos/bootupd/issues/468#issuecomment-15... (and similar) is the path forward.
I guess I will see if I can get my head around https://github.com/frap129/arch-bootc and see if that works without any terrible hacks (like https://github.com/bootc-dev/bootc/issues/865#issuecomment-2... talking about hacking the code to replace rpm with pacman), and then, assuming I don't lose steam, I can write something on the relevant issue(s) outlining exactly what does and doesn't work and how to reproduce it (because currently someone just trying to see what the current state of things looks like has to navigate through several layers of pages and can still walk away uncertain). My vague plan is:
[x] try building a qcow2 image from quay.io/centos-bootc/centos-bootc:stream9
[x] try building a lightly modified container image from centos stream and turn that into a qcow2
[ ] try building frap129's archlinux image
[ ] write up minimal steps to reproduce frap129's archlinux image and boot it in qemu
[ ] try building a debian image
[ ] try building an alpine image
[ ] try building a postmarketos image (my actual original goal)
and either leave a trail of documentation of how to do things that do work, or else write up exactly what breaks.
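For the first two items, the invocation I'm planning to use is roughly this (flags as I understand them from the bootc-image-builder README, so treat it as a sketch; the output path may differ):

    mkdir -p output
    sudo podman run --rm -it --privileged \
        --security-opt label=type:unconfined_t \
        -v ./output:/output \
        -v /var/lib/containers/storage:/var/lib/containers/storage \
        quay.io/centos-bootc/bootc-image-builder:latest \
        --type qcow2 \
        quay.io/centos-bootc/centos-bootc:stream9
    # the disk image should land under ./output/ (qcow2/disk.qcow2 last I checked)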
Excellent. I'm sure that even if you get stuck, we can encourage/get help from the bootc people to get the ball rolling again.
There seems to be a pretty big push for bootc internally at Red Hat, and from my limited experience the team is helpful. Do you have a blog where you're recording your status?
> They can turn it on and know it will boot properly without having to worry about things users had to worry about in the past like drivers, kernel mods, or new packages breaking things – the ghosts of Linux Desktop Past.
I mean, you still need to set up your bootc image to contain those drivers and everything like that. Once you set up Debian, it will always just boot right too. This just seems like Yet Another Immutable Distro™. I guess stuff like this is a bit nicer for things like game consoles, where you aren't actually using it like a computer and installing "normal" software, but I don't really see how this will change the Linux desktop.
bootc is a great evolution of rpm-ostree IMO. Fedora Atomic had decent build tooling, but all of that was thrown away for Fedora CoreOS because of its use of Ignition. I was never a fan of Fedora CoreOS, but I really liked Atomic.
This seems to be a sensible pivot back in the other direction. We don't need the CoreOS parts that never really delivered any value. And now building a system image is as simple as writing the Containerfile.
This way of doing things probably offers little to the average Linux user. However, it's a great model for distributing 'appliance' images, or for having transactional updates on your servers. In the cloud, transactional updates aren't that big of a deal; you can do a rolling replace of your instances. On bare-metal machines, doing in-place transactional updates is a big win.
Rancher also has an interesting project called Elemental. It seems to be a little more portable for other distros, but I haven't played around with it.
I'll offer a less charitable framing of the whole topic of immutable / atomic distros: this is pretty much Linux distributors deciding they want to stop doing their job (or to redefine their job to a much smaller scope). -- I'm not saying it's unjustifiable that the ecosystem may need to be reshaped in that way. I'm just cautioning people against drinking the “this is the future and the future looks bright” Kool-Aid all too easily.
The job of making a Linux distribution has always been what, in old-fashioned terms, used to be called “system integration” work. They would start with a bewilderingly huge array of open-source packages, each being developed without any centralized standard or centralized control over what the system actually looks like. Then they would curate a collection of build recipes and patches for those packages.
The value a distro delivers for the user is that, for any package “foo” that their heart desires, a user can just say “apt install foo” and it'll “just work”. There will be default configuration and patches to make foo integrate perfectly with the rest of the system.
The value a distro delivers for package maintainers is: “Don't worry about the packaging. Just put your code out as open source, and we'll take care of the rest.”
The job of a distributor is extremely difficult, because of all the moving parts: People select their hardware, their packages, and they mess with the default configurations. It is no wonder at all that Linux distributions don't always succeed in their mission to truly deliver on this. But it's a huge engineering achievement that they work as well as they do, and I think we shouldn't lightly give up on that achievement.
What we have now is basically distros going: Awwwww. Fuck it. This is too hard. I'm done with this. You know what? Instead of “any package your heart desires”, you get a fixed set of packages. The ones that everyone needs regardless of what they actually do with their computer. Instead of being allowed to mess with your configuration, we'll make your rootfs read-only. (In the case of SteamOS): Instead of doing our best to make it work on your hardware, we'll tell you precisely which piece of hardware you'll need to buy if you want our software to run on it.

User: Well, that's additional money I need to spend. And, how do I install my favourite app “foo”? The one I need to actually get useful work out of my computer?

Distro: Don't worry, we've got you covered. We'll provide a runtime for distrobox and flatpaks.

Package maintainer of “foo”: How do I get my package out in a way that perfectly integrates with distros?

Distro: Make a container. Congratulations: This is additional work you have to do now, that you didn't have to do before. And about that idea of perfect integration: You can kiss that goodbye.

User: I don't know. I'm also in favour of integration.

Distro: That's alright. You can share and unshare stuff between containers and the host system. This, of course, is additional work you didn't have to do before.

Less work for me, more work for everyone else. The future looks so bright.
In what I wrote above, I wasn't referring to NixOS or Guix. I was thinking of the other ones (SteamOS, Fedora Silverblue, OpenSuSE Aeon, Vanilla OS, etc.) -- In fact, I think it's a bit misleading to lump them together in the same category of "atomic" or "immutable". This term has come to mean way too many different things.
To be honest, most developers would much prefer to write containers or Flatpaks if they just work on any Linux machine.
There is no free lunch: the developer might feel that his package just got into apt magically without him putting in any effort, but the maintainer has to put in that effort, and it might not be as streamlined for the developer as a container created by the dev himself.
It also provides more security. Flatpaks are really neat, but they aren't used much in the CLI world in my opinion; I wanted to make a Flatpak CLI tool and I just couldn't, so I gave up.
AppImages are also nice, but they have some issues too. I had created appseed, which basically created a static binary from dynamic binaries automatically using zapps.app, but it has some issues and I am too lazy.
What kind of integration do you mean? Basically the only integration that distros do is forcing all packages onto a single shared set of library dependencies, which has relatively little user-facing benefit (in fact, it's mostly to make it easier for the maintainers to do security updates). This push towards AppImages and the like is basically about standardising the interface between the distro and the application, so application developers don't need to rely on the distros packaging their app correctly, or do N different packages for N different distros and deal with N different arbitrary differences between them (and if they want to delegate this packaging work like before, they can; not all of these various packages are put out by the author of the software).
(Now, whether these various standards work well enough is a different question. There seems to be a bit of a proliferation of them, all of which have various weaknesses ATM, so it seems there are still some improvements to be made there, but the principle is fairly sensible if you want to a) have a variety of distros and b) not have M*N work to do for M applications and N distros.)
I very much work at the coalface here, and "application developers don't need to rely on the distros packaging their app correctly" occasionally happens but is most often about miscommunication. Application developers should talk to the distros if they think there's a packaging problem. (I talk to many upstreams, regularly.) Or, more often, application developers don't understand the constraints that distros have, like we need a build that is reproducible without downloading random crap off the internet at build time, or that places configuration files in a place which is consistent with the rest of the distro even if that differs a bit from what upstream thinks. Or we have high standards for verifying licensing of every file that is used in the build, plus a way to deploy security updates across the whole distro.
And likewise, packagers often don't understand that the application has been extensively tested with one set of library versions, that changing them around to fit the distro's tastes will cause headaches for the developers of that application, and that they have a vendored fork of some libraries because the upstream version causes bugs in the application. It's a source of friction, the goals are different, and users are often caught in the crossfire when it goes poorly (and when each application is packaged N times, there are N opportunities for a distro to screw something up: it's extremely rare that a distro maintainer spends anywhere near as much time on testing and support as the upstream developers do, since maintainers are usually packaging many different applications, while upstream is usually multiple developers focused on one project).
Software should be written robustly, and libraries shouldn't keep changing their APIs and ABIs. It's a shame some people who call themselves developers have forgotten that. Also, you're assuming that distro packagers don't care, which is certainly not true. We are the ones who get to triage the bugs.
They should, but the world isn't perfect and occasionally you do actually need to apply workarounds (which application developers also dislike having to deal with, but it's better than just leaving bugs in). Distros would run screaming from the bare metal embedded world where it's quite common to take a dependency and mostly rewrite it to suit your own needs.
And I'm not saying distro maintainers don't care, I'm just saying they frequently don't have the resources to package some applications correctly and test them as thoroughly, especially when they're deviating in terms of dependencies from what upstream is working with. And much as the fallout from that should land on the distro maintainer's plate, it a) inevitably affects users when bugs appear in this process, and b) increases workload for upstream because users don't necessarily understand the correct place to report bugs.
The place where my argument is coming from is that the MxN nature is pretty much inescapable.
> What kind of integration do you mean?
See? The "integration" is something you only notice when it breaks (or when you're working through LFS and BLFS in preparation for your computer science Ph.D.) -- This kind of work is currently being done pretty well, so it rarely breaks, so people think it doesn't even exist. Also notice that a Linux distro is what's both on the outside and the inside of most containers. If Debian stops doing integration work, no amount of containerization will save us.
So, what kind of breakage might there be? Well, my containerized desktop app isn't working. It crashed and told me to go look for details in the logfile. But the logfile is nowhere to be found. ...oh, of course. The logfile is inside the container. No problem, just "docker exec -ti <container> /bin/bash" to go investigate. Ah, problem found. DBUS is not being shared properly with the host. Funny. Prior to containerization I never even had to know what DBUS was, because it just worked. Now it's causing trouble all the time.

Okay, now just edit that config file. Oh, shoot. There's no vi. No problem, just "apt-get install vi" inside the container. Oh, "apt" is not working. Seems like this container is based on Alpine. Now what was the command to install vi on Alpine again?

...one day later. Hey, finally got my app to start. Now let's start doing some useful work. Just File|Open that document I need to work on. The document sits on my NAS that's mounted under "/mnt/mynas". Oh, it's not there. Seems like that's not being shared. That would have been too good to be true. Now how do I do that sharing? And how does it work exactly? If I change the IP address of my NAS and remount it on the host, does the guest pick that up, or do I need to restart the app? Does the guest just have a weak reference to the mountpoint on the host? Or does it keep a copy of the old descriptor?

...damn. In 20 years of doing Linux, prior to containerization, I never needed to know any of this. ...that's the magic of "system integration". Distros did that kind of work so the rest of us didn't have to.
God, yes. I did some training courses over Zoom. The presenter frequently shared PDF files we had to interact with, but the Zoom download button dropped them in the Zoom container. Figuring out how to get hold of them was a PITA.
Of course, the Windows users didn't have this problem. Flatpak, etc. are objectively making the Linux user experience worse.
Those aren't particularly useful examples, though. They're all things that have been artificially separated in containers, and now there's a bunch of work to punch the right holes in that separation, because people want the sandboxing of containers from a minimum-trust point of view, and that's pretty hard to get right. Previously this wasn't a problem, not because the distros solved it, but because there was no separation of D-Bus or views of the filesystem or the like.
(D-Bus, much like a lot of the rest of desktop integration, is something that has been standardised quite heavily, such that you can expect any application that uses it to basically work without any specific configuration or patching, unless you've insisted on fiddling with the standard setup for some reason. It used to be that the init system was an area which lacked this standardisation, but systemd has evened out a lot of those differences, which distro and app maintainers as well as users all benefited significantly from. Most of containerisation is basically trying to do the same with libraries as well, but most projects are also trying to achieve some level of sandbox separation between applications at the same time.)
(This is one reason why I don't much like a lot of the existing approaches here: I think the goals are admirable and the overall approach makes sense, but the current solutions fall quite short)
this is the absolute nadir of container/cloud/devsecops chinstrap hipster nonsense. we want to turn the entire system into a docker container because reasons? without ever addressing the sins of flatpak and the performance hit from containerization vs bare metal? what exactly is the win here?
"Bootc allows you to make an OS the same way you make an application, using containers!"
but why.
"There is no denying that most applications are shipped today as a Docker container"
postfix, dovecot, nginx, veloren, gnome, hell, roughly 16,000 packaged applications in the EPEL repository do not insist upon themselves as containers. Forgejo doesn't require a docker container either. the only people pushing docker are developers who are trapped toiling with some teetering monstrosity of rails/gunicorn/pypi dependencies that are so odious and brittle to deploy in a normal fashion that a containerized offering is the only way people would ever use them in the first place.
is this just another attempt to market capture my desktop with a proprietary backend store like snaps? or force me to sign up for a docker account just to log into my laptop? did a docker C level write this?