> This move is part of a broader effort by Canonical to improve the resilience and maintainability of core system components. Sudo-rs is developed by the Trifecta Tech Foundation (TTF), a nonprofit organization that creates secure, open source building blocks for infrastructure software.
Ubuntu continuously updates itself without permission, killing apps and losing previous state. You have the JavaScript-based GNOME window manager that is always bugging out. The Ubuntu packages, drivers, and kernel are laughably behind Debian and even further behind mainline. Ubuntu continues to morph into something I don't believe in.
That all said, Rust is not a smoking gun for incorrect application logic. It could still happily execute stuff incorrectly with the wrong permissions or blow something up badly. I think it's also a bad idea to offer it as a drop-in replacement when features have clearly been missing for a long time [1].
> That all said, Rust is not a smoking gun for incorrect application logic. It could still happily execute stuff incorrectly with the wrong permissions or blow something up badly.
This side steps the issue which is "Does Rust help you make software more correct?" No one is arguing that Rust is perfect. There are plenty of bugs in my Rust software. The question is only -- are we better off with Rust than the alternatives?
> I think it's also a bad idea to offer it as a drop-in replacement when features have clearly been missing for a long time [1].
Your example is the GitHub issues page?
Look -- I agree that, say, uutils/coreutils missing locales may frustrate some users (although I almost never use them). But "close enough" is more the Unix way than we may care to admit, and especially in this instance, because sudo is not specified by POSIX (unlike locales, which are). A distro is free to choose from any number of alternatives.
Ubuntu wants to lay claim to "the Rust distribution" and it's hard to blame them when Linux wants to lay claim to "the Rust kernel".
> Entirely untrue. It may happen, but there is zero consensus to port Linux to rust. Not even the tiniest bit.
... But I did not say there was a consensus to port Linux to Rust? I'm sorry you misunderstood.
Now, why would Linux want to lay claim to being 'the Rust kernel' and how is that different than Linux being rewritten in Rust? I believe that there are many reasons why Linus chose to give Rust for Linux a chance. I believe at least one of those reasons is mindshare. If Linux chose not to experiment with Rust drivers, then that mindshare might go somewhere else.
>> Ubuntu wants to lay claim to "the Rust distribution"
Notice, Ubuntu is doing a similar thing. Canonical isn't porting all of Ubuntu to Rust. It is picking and choosing the bits it can, and would like to, move to Rust. Why? Probably for similar reasons: they want to be seen as friendly to the Rust mindshare.
Now, why would Linux want to lay claim to being 'the Rust kernel' and how is that different than Linux being rewritten in Rust?
This isn't a thing. Linux isn't laying claim to any such assertion.
If you want to know Linus's reasons, then read the LKML. He's quite open in all of his thoughts in this regard, and it has nothing to do with labeling Linux 'the Rust kernel'.
I don't know if this is some weird sort of advocacy, or you're just listening to a lot of over-the-top Rust people, but nothing you're saying here is real.
>> ... I believe that there are many reasons why Linus chose to give Rust for Linux a chance. I believe at least one of those reasons is mindshare. If Linux chose not to experiment with Rust drivers, then that mindshare might go somewhere else.
Again, very sorry you misunderstood me. However, I am now pretty certain one of your difficulties is that you stop short of reading my next sentence, and then my next sentence after that. See my quoted comments above. I made very clear these are strictly my beliefs.
> If you want to know Linus's reasons, then read the LKML.
Perhaps when I said "I believe" I was being too subtle about what "my beliefs" are or what "beliefs" mean.
I hope you would agree -- just because one has expressed certain technical reasons/desires does not mean that there were not any unexpressed social reasons/desires, or what philosophers also call "higher order volitions" (long term volitions, or volitions about volitions), for one's actions.
Now -- I do not know but I do believe there may be social reasons for Rust being adopted in the Linux kernel, because I have read the LKML and I have reasoned about why Linux is experimenting with Rust drivers from things Linus and others have said.
Feel free to disagree, of course, but, in the future, please make more of an effort to avoid mischaracterizing me again.
> Drop the "mischaracterising" routine. You're asserting specific things, so expect to get challenged when they're nonsense.
If I said "Coca Cola wants to lay claim to being the best cola soda in the world", I'm sure you would also say to me: "Patently untrue. Where exactly in Coca Cola's public statements are we to find that statement?!", instead of, perhaps reflecting, and asking yourself -- is that a reasonable belief for one to hold, given Coca Cola's marketing?
If I am not conforming to your expectations, perhaps it is your expectations that need a reset.
Ubuntu wants to lay claim to "the Rust distribution" and it's hard to blame them when Linux wants to lay claim to "the Rust kernel".
You stated this as fact, as an active statement and goal. It isn't. At all. It's made up fantasy.
Trying to reframe things after, by changing that statement into "oh, that's just an idea I had!" and then blaming others, is invalid and dishonest.
You seem to want to blame others for your made up, untrue statements being challenged. Give it a rest. Your attempts to blame shift will gain no traction here.
You stated something as fact that is not. You were wrong to do so. You are wrong to blame me for pointing it out. You are wrong to continue complaining.
> You stated something as fact that is not. You were wrong to do so. You are wrong to blame me for pointing it out. You are wrong to continue complaining. You are wrong. Clear?
Ugh. Well, I suppose it must seem very unfair to live a life without the benefit of figurative language and/or subtextual meaning. Know that I'm praying for a cure.
> This side steps the issue which is "Does Rust help you make software more correct?" No one is arguing that Rust is perfect. There are plenty of bugs in my Rust software. The question is only -- are we better off with Rust than the alternatives?
There is a lot of embedded knowledge in existing implementations, Rust deals with just one small class of bugs but drops a lot of this knowledge in the process.
I would generally be in favour of just introducing better memory management to C/C++ and hard enforcing it in particular repositories.
> There is a lot of embedded knowledge in existing implementations,
Agree. The question whether to rewrite and/or whether to use any new implementation should take this fact into account.
> Rust deals with just one small class of bugs but drops a lot of this knowledge in the process.
Hard disagree. Rust explicitly deals with several very important classes of bugs (memory safety and concurrency), and also aids correctness via other helpful design features like tagged unions and immutability by default. But Rust, the language, does not drop any knowledge in the process, though any decision to rewrite in any language may drop some knowledge, and/or may create new bugs, in the process.
> I would generally be in favour of just introducing better memory management to C/C++ and hard enforcing it in particular repositories.
This is really easy to say, but in practice it just hasn't worked out, and there is loads of empirical evidence to back that up.[0] It is not as if market incentives don't exist to create better C/C++ code.[1] If you have a way to do it better, I have no doubt Google, half a dozen other hyper-scalers, and the US government will pay you handsomely for your solution. But, at this point in time, if this is the solution, I'm afraid it's time to put up or shut up.
> This is really easy to say, but in practice it just hasn't worked out, and there is loads of empirical evidence to back that up.[0] It is not as if market incentives don't exist to create better C/C++ code.[1] If you have a way to do it better, I have no doubt Google, half a dozen other hyper-scalers, and the US government will pay you handsomely for your solution. But, at this point in time, if this is the solution, I'm afraid it's time to put up or shut up.
It really wasn't too difficult to get high reliability for memory management in C/C++ that is also concurrency-safe (I have active projects that have been running for years like this). The difficulty was enforcing it: you are reduced to a subset of the language, and violations have to be discoverable at compile time.
The trap I'm concerned we are falling into is "just rewrite the C/C++ project in Rust". I still believe the solution is enforcing better practices at compile time.
> Ubuntu continuously updates itself without permission...
It does default to installing security updates automatically. However, this is completely configurable.
It isn't reasonable to have different default behaviour - otherwise the majority of users would be vulnerable from a security perspective.
If you want different behaviour, just configure it as you wish.
> The Ubuntu packages, drivers and kernel are laughably behind Debian and even further behind mainline.
This just isn't a reasonable description of reality.
Unless you're referring to an Ubuntu LTS, in which case, of course it is: that's the entire point of Ubuntu LTS. Ubuntu users also have the choice of using the six-monthly non-LTS releases if they want more up-to-date packages, a choice Debian users do not have.
> It does default to installing security updates automatically. However, this is completely configurable.
Man, but have you personally tried to disable it?
Did you stop apt-daily.service, apt-daily.timer, apt-daily-upgrade.service, and apt-daily-upgrade.timer? Did you repeat the same, masking and disabling those units? Don't forget to repeat that for unattended-upgrades.service. Even after all that, whenever our CI fails an apt-get, we have a pstree output ready to figure out what other dark pattern Canonical came up with.
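For the record, here is a sketch of what "actually off" looks like on a stock systemd-based Ubuntu. The unit names below are the ones Debian/Ubuntu currently ship; verify them with `systemctl list-timers` on your release before relying on this, and the filename `99-disable-periodic` is my own choice.

```shell
# Disable and mask apt's periodic timers and services (stock unit names;
# confirm on your system with `systemctl list-timers`).
sudo systemctl disable --now apt-daily.timer apt-daily-upgrade.timer
sudo systemctl mask apt-daily.service apt-daily-upgrade.service
sudo systemctl disable --now unattended-upgrades.service

# Belt and braces: also turn the periodic jobs off in apt's own config.
printf '%s\n' \
  'APT::Periodic::Update-Package-Lists "0";' \
  'APT::Periodic::Unattended-Upgrade "0";' \
  | sudo tee /etc/apt/apt.conf.d/99-disable-periodic
```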
This whole debacle made me consider RedHat for my next install, and I have used Ubuntu for almost two decades. It has become unreliable on servers.
Don't get me started on the lack of security updates for "multiverse" packages, which has started to include more and more packages, so LTS means less and less. This is not innocent: it's so you buy Ubuntu One.
Their answer will be "just make your server robust to restarts bro", not really understanding that some stuff simply cannot be restarted. We have Ubuntu desktop running a robot arm (not our choice of OS, but the manufacturer's). Mid-operation, snap decides to kill the robot software that would otherwise be happily operating away and sending stats to the cloud.
I'm personally moving to Debian. It's 99% how Ubuntu used to be and most Ubuntu stuff is just a .deb that is relatively compatible.
> Man, but have you personally tried to disable it?
Sure. It's just a one line change in the configuration file (/etc/apt/apt.conf.d/50unattended-upgrades). Or, if you're doing a mass deployment, just don't install the unattended-upgrades package.
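To make that concrete: on a stock install the on/off switch for the periodic run usually lives in /etc/apt/apt.conf.d/20auto-upgrades (the 50unattended-upgrades file mentioned above controls what gets upgraded, not whether). Filenames vary between releases, so check your own tree first; the sed invocation below is just one way to flip the flag.

```shell
# Inspect the periodic-upgrade switch (typically both values are "1").
cat /etc/apt/apt.conf.d/20auto-upgrades

# Flip unattended upgrades off in place.
sudo sed -i 's/Unattended-Upgrade "1"/Unattended-Upgrade "0"/' \
  /etc/apt/apt.conf.d/20auto-upgrades

# Or remove the package entirely if you never want the behaviour.
sudo apt-get remove --purge unattended-upgrades
```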
> figure out what other dark pattern canonical came up with
The mechanism is inherited from Debian. It isn't Canonical's architecture.
If you want to hack internals to do things in a more complicated way, then that's up to you, but you can't then complain that it's unnecessarily complicated.
> Sure. It's just a one line change in the configuration file (/etc/apt/apt.conf.d/50unattended-upgrades). Or, if you're doing a mass deployment, just don't install the unattended-upgrades package.
That answer shows you have not seen that pattern fail. When it fails or is overwritten by an update, remember my comment.
Ironically you just added another way to configure a simple thing, proving my point.
> It isn't reasonable to have different default behaviour - otherwise the majority of users would be vulnerable from a security perspective.
A better default behaviour would be to alert the user and allow them to choose to indefinitely defer by "accepting the risk". Some setups, rightfully or wrongfully, have a very long running time and cannot be restarted.
> If you want different behaviour, just configure it as you wish.
I'm not sure if it changed, but they made it extremely difficult on purpose. You can stop snap from updating, but then lots of other things also break.
> A better default behaviour would be to alert the user and allow them to choose to indefinitely defer by "accepting the risk".
That would be terrible UX and is exactly contrary to Ubuntu's philosophy, which is to do the right thing by default.
The alternative is to bombard the user with questions that they're generally not in a position to understand, and force them to receive an education on stuff that doesn't matter to most users before they can use their computer.
> That would be terrible UX and is exactly contrary to Ubuntu's philosophy, which is to do the right thing by default.
Even in Windows (or at least it used to be), the decision to perform an update now was a user decision. Just killing off applications without warning is the worst UX ever. Randomly killing stuff off is the opposite of what I want my OS doing.
> The alternative is to bombard the user with questions that they're generally not in a position to understand, and force them to receive an education on stuff that doesn't matter to most users before they can use their computer.
It doesn't have to be like that. It could be: "Do you want to update now? The following programs are affected and will be restarted: X, Y, Z. [Learn more]" The answers could be "Yes", "Remind me on next boot", "Remind me later" (offers common delays, i.e. 1 hour, 1 day, 1 week).
What it should never do is take power away from the user. I saw an Ubuntu user's system restart their snap programs in the middle of delivering a conference presentation, without warning. That was the worst way that could have been handled.
> I saw an Ubuntu user's system restart their snap programs in the middle of delivering a conference presentation without warning.
It's been years since they added warnings for upcoming snap updates. There's also "refresh awareness", which defers updates (to a limit, with warnings before exceeding the limit) while a user is using an app.
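For anyone who wants to constrain when refreshes happen rather than whether: snapd exposes a refresh schedule. The window below is an example value; the `refresh.timer` syntax is documented by snapd, so check it against your snapd version.

```shell
# Show the current refresh schedule plus last/next refresh times.
snap refresh --time

# Restrict automatic refreshes to a window, e.g. Saturdays 03:00-05:00.
sudo snap set system refresh.timer=sat,03:00-05:00
```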
I meant "smoking gun" from a cyber security perspective, i.e. the conclusion or the final part of the investigation. "magic bullet" would also work here too though.
You seem to have inferred the wrong meaning of "smoking gun" and that's why your usage above doesn't make sense.
There's no valid reason cyber security people would take a well known idiom and repurpose it as you imply, and a quick Google suggests they haven't done this.
Not sure what OP was referring to, but snaps are indeed a ridiculous problem.
There's no control of when snaps update, Ubuntu has explicitly said they will never add this.
There was no way to disable snap auto-updates until just last year(-ish?), when Firefox finally announced they would no longer support snaps and started telling people how to tear them out and replace them with native packages or Flatpaks. Lo and behold, Ubuntu suddenly got the feature to disable automatic snap updates -- after years of saying explicitly they would never allow it, and telling high-uptime users to instead block the snap daemon's network access via the firewall.
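The two eras side by side, as I understand them. The firewall workaround assumes the store endpoint is api.snapcraft.io (confirm against your own traffic before depending on it); the hold switch landed around snapd 2.58, so check `snap version` first.

```shell
# Old folk remedy, before snapd grew a hold switch: block the store.
sudo iptables -A OUTPUT -p tcp -d api.snapcraft.io --dport 443 -j REJECT

# Supported approach on recent snapd (~2.58+):
sudo snap refresh --hold      # hold all snaps indefinitely
sudo snap refresh --unhold    # and undo it later
```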
apt won't on its own, but if you're using the official images there's probably a service running that's calling it, probably for security patches etc.
The bigger problem is upgrading packages deliberately but being surprised by the results. My team's current favorite is the upgrade process itself suddenly having new interactive prompts breaking our scripts.
> My team's current favorite is the upgrade process itself suddenly having new interactive prompts breaking our scripts.
This is how dpkg and apt have worked in Debian and Ubuntu pretty much since their inception. Look into debconf, dpkg and ucf configuration to learn how to integrate these with your automation. The mechanisms for this have existed for decades now and have not substantially changed in that time.
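A sketch of what that integration typically looks like in CI, using the mechanisms mentioned above (the debconf noninteractive frontend plus dpkg's conffile policy). The tzdata line is only an illustrative example of pre-seeding an answer.

```shell
# Tell debconf never to prompt; fall back to defaults instead.
export DEBIAN_FRONTEND=noninteractive

sudo -E apt-get update
# Keep existing conffiles on upgrade and take maintainer defaults otherwise,
# so dpkg never stops to ask.
sudo -E apt-get -y \
  -o Dpkg::Options::="--force-confdef" \
  -o Dpkg::Options::="--force-confold" \
  dist-upgrade

# Pre-answer a specific package's prompts ahead of time, e.g.:
# echo 'tzdata tzdata/Areas select Etc' | sudo debconf-set-selections
```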
If you're installing software from Debian/Ubuntu repos, you can only use aptitude or apt to my knowledge. Other tools give you the ability to install DEB files you already have, and manage what's on your system currently.
And aptitude and apt are both well known for never having had a "stable" scriptable interface. In fact they themselves tell you that their commands are not stable and should not be used for scripting, despite no alternative mode or application existing.
Recently Ubuntu moved to apt 3 as well, which massively overhauled the tool from apt 2. All those scripts people wrote to use apt 2 (because there was no alternative) broke recently when they now had to use apt 3.
Your understanding is just outright wrong. The `apt` command has an unstable interface so that it can improve the CLI without worrying about breaking scripts. The `apt-get` command is the stable interface for scripts. `apt` was created after `apt-get` became ossified exactly because the developers work hard to keep the interface for scripts stable.
> In fact they themselves tell you that their commands are not stable and should not be used for scripting, despite no alternative mode or application existing.
No, that's just the apt command, not the apt-get command, and the manpage for apt tells you exactly how to do this instead. It's clearly documented, so your "despite no alternative mode or application existing" claim is simply ignorant.
Please read the documentation and learn how to use the tooling before criticizing it and misleading others with claims that are outright wrong.
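To illustrate the split being described: `apt` even warns you when its output is not a terminal, while `apt-get` and `apt-cache` are the script-facing tools. The package name below is just an example.

```shell
# `apt` is for humans; in a pipe it prints a warning about its unstable CLI.
apt list --installed 2>&1 | head -1

# Script-safe equivalents with stable interfaces:
apt-get -q -y install --no-install-recommends curl
apt-cache policy curl
```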
Ubuntu continuously updates itself without permission, killing apps and losing previous state
I've never seen this happen and I've run Ubuntu in production for years. Apt does not auto-update unless it's configured for unattended upgrades — and both Debian and Ubuntu allow you to configure unattended upgrades in apt. And unattended upgrades via apt should not kill running user processes or cause data loss.
The Ubuntu packages, drivers, and kernel are laughably behind Debian.
This is just plain wrong — even for the steelman argument of Debian unstable or testing, which are not intended for general use. Debian unstable and testing are on kernel 6.12. Ubuntu 25.04 is on kernel 6.14.
Debian stable, meanwhile, is on 6.1. Ubuntu has the far more-recent kernel.
I don't know what you mean by "drivers" — on Linux, drivers aren't separate from the kernel; they ship with it. Ubuntu's are therefore also more recent than Debian's, since its kernel version is more recent.
With respect to packages, obviously I can't check every package version, but e.g. coreutils in Ubuntu are on 9.5, released in March 2024; systemd on Ubuntu is a version released this year (and until last month Debian unstable and Ubuntu were identical); gcc is identical; etc. While Ubuntu occasionally lags Debian unstable, it's not by much.
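Version claims like these are easy to check yourself. `apt-cache policy` shows what your local system sees, and the madison-style archive lookups (rmadison, from the devscripts package) query the Debian and Ubuntu archives directly; coreutils here is just the example package.

```shell
# Candidate and installed versions on the local system.
apt-cache policy coreutils

# Archive-wide lookups, per suite/release (rmadison is in devscripts).
rmadison coreutils             # Debian archive versions
rmadison -u ubuntu coreutils   # Ubuntu archive versions
```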
If you compare to actual Debian stable, it's not even close. Debian stable is ancient.
And ultimately... Why are you using Debian unstable? It's called "unstable" for a reason. It receives basically no testing. Even the "testing" version is more stable, and that's not intended to be stable at all and doesn't necessarily receive security updates. Ubuntu is less-stable than Debian stable, but far more up-to-date; Debian testing is less-stable than Ubuntu... And usually still not even as up-to-date. Debian unstable is basically untested; if you want that you'd be better served by a rolling release distro like Arch where the packages are going to be way more up-to-date anyway.
The Debian wiki cautions against treating unstable or testing releases as general purpose, so I truly don't think even this steelman is viable. [1] In fact, they refuse to even call Debian unstable a "release" since there are no release practices associated with it and the code is effectively untested.
Ubuntu is nowhere near my favorite Linux distro, but claiming it's more out of date than Debian is just FUD.
Debian is very very stable — at least, Debian stable is — and people love it for that. But the tradeoff is that everything in it is ancient. If you want something that's like Debian, but more up-to-date but slightly less stable — that's Ubuntu. If you want a rolling release, that's Arch. (And of course, there are even more-different distros like NixOS or ostree-based ones; there's the Red Hat universe of RHEL and the closer-to-bleeding-edge Fedora; etc etc.) Using Debian unstable is either a magnanimous act of sacrifice in order to help test future Debian versions, or it's self-harm.
Personally, if I wanted to use a Debian derivative on the desktop, I'd probably use System76's Pop!_OS, which is basically a cleaned-up Ubuntu with some nice GNOME extensions. In the future, though, I'm curious to try out ostree-based distros, like the various Fedora Atomic ones, since they have nice rollbacks without the user-facing complexity of NixOS.
I have the hardware for a new home server waiting to be set up (as in, I don't need a new home server, I'm just messing around, so once in a while I log in and configure one more service).
I tried the latest Ubuntu and it seems to be targeted at either containers or desktops. Everything I wanted to set up networking-wise was a pain for my little non-standard configuration.
Ended up wiping it and installing Debian instead.
As for this Rust thing, the first question that comes to my mind is what features are missing from this new, godly, impervious-to-hackers-by-default implementation.
After years of working with Ubuntu on desktops and servers, I can tell you that for a server Ubuntu will probably always be the wrong choice.
Ubuntu seems to find it necessary to always invent some new way of doing a standard thing. Take netplan for networking: a tool they invented themselves for a task that already had industry-standard options, one that is missing basic features those alternatives have and adds nothing they don't also have (including any better usability).
They do this all the time, and have to eventually be dragged into the modern era when they finally get sick of having no community support for their one-off inferior tool.
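For readers who haven't met it, this is roughly what netplan asks you to write, as a config fragment rather than runnable code. The interface name and addresses below are placeholders for illustration; the file path is the conventional location on Ubuntu.

```yaml
# Sketch: /etc/netplan/01-static.yaml -- a minimal static-IP setup.
# enp1s0 and the addresses are illustrative placeholders.
network:
  version: 2
  ethernets:
    enp1s0:
      addresses: [192.168.1.10/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [9.9.9.9]
```

Applied with `sudo netplan apply`, which renders this into a systemd-networkd or NetworkManager configuration behind the scenes.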
In particular, I'm just waiting for snaps to finally die. But at least snaps have some technical possibilities the alternatives don't; they just aren't functionally available yet. In another 20 years, if Ubuntu keeps at it with their unconfigurable, hardcoded, private snap registry and their slow, limited portals-equivalent implementation, they might even have half as much functionality and as many packaged tools as Flatpak has today.
---
If you want a decent server, Debian is a better option, even though they have some finicky choices, and it's enough like Ubuntu that you might have some cross-over familiarity.
Some of the old standbys like Fedora aren't good options because of their frequent update schedule and lack of long term support, but there are also some very good niche options if you can dig a lot more.
Also worth noting: if you want to keep the server working, you should plan on pretty much everything being in containers. It adds some complexity to what you're doing, but keeps each little experiment isolated from the others and avoids polluting the global system.
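A sketch of what that isolation looks like in practice, using podman (docker is drop-in similar). The container name, image, port, and volume path are all example choices, not recommendations.

```shell
# Run one experiment as its own container, surviving reboots.
sudo podman run -d --name media \
  -p 8096:8096 \
  -v /srv/media:/media:Z \
  --restart=unless-stopped \
  docker.io/jellyfin/jellyfin

sudo podman ps            # each experiment is listed independently...
sudo podman rm -f media   # ...and can be torn down without touching the host
```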
When there's Debian, Ubuntu is moot for servers and personal use (for power users at least).
One of my former colleagues used to install Ubuntu servers. I replace them with Debian when I get the chance. I had already blacklisted them over Snap, so I can't re-blacklist them for going uutils and sudo-rs, and that's sad (as in Bryan Cantrill's famous talk).
[1] https://github.com/trifectatechfoundation/sudo-rs/issues?pag...