
I'll opine, happily, that nothing showcases the gap between the OS people and the Web people as well as someone who considers 10 months to be a significant length of time.

There is a happy day coming for the web developers, sometime this decade hopefully, when skills learned as long as 12 months ago will still be up to date.

Did you notice OpenJDK has been upgraded from v11 to v11? That sort of radical change is why I run Debian. They don't break stuff.




When I was an intern there was this old dude who had an ancient computer with the same Debian on it for like 7 years. He never got excited about new stuff, and just didn't want to break anything. I thought that was kind of funny and un-hip, back then, but now I'm like that. It's like Dad OS. :)


I feel fortunate to have learned this lesson right before grad school… I used to run Gentoo with all kinds of optimization flags. The day of a conference deadline, I mucked up my /etc/ pretty bad. Switched to Debian and never looked back - sometimes you just need something that works.


For servers, Debian makes a lot of sense. For desktop Linux, I can't imagine living with outdated packages, kernels, and applications, and trying to hack my way around with PPAs or other systems like Flatpak or AppImage. Rolling distro all the way.


Similar opinion, though I would say "it depends".

Personally I like Gentoo on the desktop, especially when I code (e.g. it's very handy to be able to easily switch software versions of some packages while always using the same repository). I use it as well on some servers as the root OS (I mean the one that runs mainly just the hypervisor) if they have special needs (e.g. if for some reason I want/have to use a recent version of some software, e.g. ZFS, the kernel, the firewall, QEMU, etc.).

On the other hand, for VMs I usually just use Debian or Mint, as the maintenance/upgrade effort is a lot lower and quicker. In some cases I still have to use PPAs, but they're usually exceptions (e.g. Postgres 13, kernel 5.10, and ClickHouse for Debian 10).
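
(For the curious: on Gentoo that version switching mostly comes down to slots and eselect. A rough sketch using gcc as an example; the exact module and target names may differ on your system:)

  eselect gcc list                         # show installed versions/slots
  eselect gcc set x86_64-pc-linux-gnu-10   # switch the active one
  # or keep a newer slot from being pulled in at all:
  echo "sys-devel/gcc:11" >> /etc/portage/package.mask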


Same. And I loved Gentoo so much. Made me feel like a car mechanic, upgrading his machine with all the custom parts.

I would go back to running Gentoo if I had the time to mess around, it was a ton of fun and it really did give you a system that was truly yours. But it also needed a lot of love to keep running smoothly. And I don't miss those compile times.


I have the same story. I ran Gentoo on all my machines for 5 years (2002-2007) and after the umpteenth update which broke printing, I switched to Debian stable and never looked back.


You should use btrfs with snapshots. You won't be in that situation ever again. Just a suggestion.
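
(A minimal sketch of what that looks like, assuming / is a btrfs subvolume and /.snapshots exists:)

  # read-only snapshot before a risky change
  btrfs subvolume snapshot -r / /.snapshots/pre-upgrade
  # recover a mangled config from it later
  cp /.snapshots/pre-upgrade/etc/fstab /etc/fstab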


That's solving the problem by adding complexity. Switching to Debian is solving the problem by taking away complexity. I know which one I'd prefer.


I disagree; using a filesystem with snapshot support is hardly adding complexity (to the problem, that is; sure, the kernel code for btrfs might be more complex, but it's been mainline for years and won't eat your data). I read your comment as suggesting unwise data-management practices. Being able to do snapshots and send them off to wherever is important. This fills the void on your system that git fills for your source code.


I'm a Dad and I use Arch. I can't stand non-rolling release Linux distros. They are constantly getting in my way because they don't track upstream.

I have multiple machines running Arch (some over a decade) with no problems. In contrast, doing dist upgrades on Ubuntu has put me into states that I could not figure out how to get out of, and thus had to do a clean install. (Granted, it's been a long time since this has happened, but mostly because I very specifically avoid non-rolling release distros.)


I've been using debian full-time for decades, but recently ended up switching to a spare laptop I'd installed Arch on long ago just for poking at.

As something to make it easier to install latest-and-greatest junk, it's certainly better than rolling your own like an LFS, and it feels a bit less annoying than gentoo so far.

But it's nowhere near as comfortable to use as-is as debian, and definitely requires more time-wasting to figure out why things aren't configured correctly after simply installing a package w/`pacman -Su $foo` or some dependency is wrong or missing.

Hell, just the other day I thought to try building something with clang instead of gcc, after not using clang in a while, and this is what (still) happens:

  $ clang
  clang: error while loading shared libraries: libffi.so.6: cannot open shared object file: No such file or directory
  $
This kind of garbage just doesn't happen on debian. At this moment I have the impression that the average Arch install is always at least partially broken in some way.

Furthermore, the tooling in Arch isn't particularly great either.

On my first week using this Arch laptop full-time, I tripped over a `pacman -Sc` bug where it was opening every single file in /var/cache/pacman/pkg via libarchive, even .part files which hadn't been completely downloaded (which means their signatures weren't yet verified), and spitting out liblzma library errors like "no progress is possible" because it was attempting to open an .xz.part file. This is arguably a security risk (in a root process, no less), in addition to being a stupid bug producing a confusing error message. And I haven't even started going deep into this distro yet; frankly, it's already left me with a bad enough taste to not be interested in wasting more time on it.

Edit: just wanted to give props for the Arch Wiki, that has been a great resource for ages.


^ This is a perfect statement of why Arch will never take over workstations or server machines. A lot of people feel it's cool to fiddle around with the filesystem/configuration to make things work (and they think that's what Linux is about); it's not! You don't want to fiddle around with the OS to make things work; it should just work out of the box. Debian is 100% suited for a dev-centric/workstation-centric environment: once you configure it, you can use it for years without worrying about what might break tomorrow.


> once you configure it, you can use it for years without worrying about what might break tomorrow.

That has been exactly the case with me and Arch.

And your comment is not just wrong, but it's condescending too.

> Debian is 100% suited for a dev-centric/workstation-centric environment

You're way too overconfident. I use a Debian derivative at $work (because I don't have a choice), and we are constantly having to work around the fact that the software in its repos is ancient.


Or you can use NixOS, where you define your entire system in a config file, which lets you have the same system configuration everywhere.

Once Nix flakes land in stable, it'll be interesting to see if NixOS can steal share from the mainstream distros in any significant manner.
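
(The workflow, for reference; /etc/nixos/configuration.nix is the single config file in question:)

  # edit the system definition...
  sudoedit /etc/nixos/configuration.nix
  # ...then realize it; previous generations stay available for rollback
  sudo nixos-rebuild switch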


I love NixOS, but given the way Flakes are written, it's not stealing mindshare from the mainstream distros anytime soon. Flakes could have been a chance to make Nix comprehensible, but it reads like a layer of complication on top of an already complicated language model.


> At this moment I have the impression that the average Arch install is always at least partially broken in some way.

Mine aren't.

I've never seen that kind of "garbage" either on Arch.

I'm trying to push back against dumb crap about how Dads don't have time for shenanigans by pointing out that, hey, maybe rolling release doesn't have as many shenanigans as you think. And of course, folks come out of the woodwork with every little anecdote about how something didn't work. No distro is perfect and there are trade-offs. The existence of your experience does not negate mine. In particular, here I am, more than a decade later, and I never have any major issues. I always get the latest software and I don't have to bother with dist upgrades.

Getting the latest software is super important to me. It is the number 1 frustration I have with non-rolling release distros. This is a fundamental trade-off. My point is that people exaggerate the downsides typically associated with rolling release: that getting the latest software means more instability. That just isn't my experience.


I had been a Debian-based distro user forever (Ubuntu, sidux, Debian unstable/testing...), and things were breaking all the time. Sure, if you stick to stable, things only break once every couple of years when you upgrade to the next version. However, stable is really not suitable for desktop work; I'd have to install almost all my usual software through other means, and that's not what I use a distro for.

I switched to Tumbleweed 2 years back and really liked it, but zypper/rpm is awfully slow and I was missing some packages (although OBS is awesome). So I tried EndeavourOS 6 months ago (Arch with a graphical installer) and I have to say I really like it. Not one break so far.

Actually, the breakage is often in configuration updates. LibreOffice is particularly bad: often it doesn't start after an upgrade, without any error message; this is fixed by wiping ~/.config/libreoffice.


Your clang error can probably be fixed by a pacman -Syu. Usually that sort of error is related to some libraries on your system being old and other programs being new (and compiled against new libraries), so the packages can't load the libraries properly when you execute them on your system. Doing a full update brings the libraries up to date so the programs can load them and run properly.
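
(If you want to confirm the diagnosis first, something along these lines; note pacman -F needs its file database synced before it can answer:)

  pacman -Fy             # sync the files database (as root)
  pacman -F libffi.so.6  # which repo package, if any, still ships it
  pacman -Syu            # then bring the whole system forward together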


Oh I'm aware of this, and I'll get around to doing just that as soon as I'm in the mood to waste more of my life chasing other breakages after pacman turns my entire world upside down just so clang can run again.


If you're not willing to do full system upgrades on the regular, why are you running a rolling-release distro? That's literally the whole point.


Honestly, running pacman -Syu on an Arch machine that sat untouched for four years was a much better experience than any of the Ubuntu and Debian dist-upgrades I’ve had to suffer through. I did have to go and merge the .pacnews, but that’s it. (Granted, for Ubuntu the last one was something like 9.10 to 10.04, things might have improved since then.) So having an infrequently-updated machine is entirely feasible, as long as you don’t try partial upgrades. (Except that one time when the StrongSwan upstream decided it was a good idea to rename their units in such a way that an old configuration ended up running the wrong IPsec daemon after an upgrade. That was a frustrating couple of hours.)


I plan on abandoning this experiment as soon as I have the time and interest.

In my comment above I mentioned this was a spare laptop: my primary laptop, which ran debian, abruptly failed, pressing this thing into regular daily use, so I used the Arch install I originally put on it for an eGPU experiment.


The thing is, it’s not that this problem can be fixed by running pacman -Syu, it is that, in half a dozen years of running Arch on several machines, the only way I could get into this particular failure state was when I ignored every bit of documentation in the name of laziness and did pacman -S thing instead of pacman -Su thing. (Or when I built things from outside the official repos and failed to keep them up to date, but I’m going to guess you’re not running a custom build of Clang, because that is the kind of pain you don’t forget.) Theoretically you might have caught a short window of inconsistent state on a package mirror, but again, from my experience, it’s always just a partial upgrade I did with my own hands. (Compared to apt, pacman is awesome, but its willingness to let you do stupid things could use some adjustment.)

On to more constructive advice—if your latest update was in the last couple of weeks, just

  pacman -Su
may work to pull your machine forward to a consistent state corresponding to the most recent installed package. If not, you can use the Arch Linux Archive <https://wiki.archlinux.org/title/Arch_Linux_Archive>: get the last update date by an incantation such as

  pacman -Qi | sed -n 's/Install Date *: //p' | xargs -d '\n' -n 1 date -I -d | sort -nu | tail -1
(which I just cooked up, so there are surely better ones, or just look at the tail of /var/log/pacman.log), temporarily replace your /etc/pacman.d/mirrorlist with

  Server=https://archive.archlinux.org/repos/YYYY/MM/DD/$repo/os/$arch
and run

  pacman -Syu # sic!
Either way, you may see a little bit of breakage (though, in my experience, it’s unlikely), but nothing you wouldn’t have had to deal with when you properly installed your current set of packages in the first place.


Ubuntu upgrades are extremely messy compared to Debian. If you want a rolling-like experience, the testing channel of Debian is pretty good for that - though it does get frozen when approaching a stable release.


The dist upgrade is more of an example meant to dispel the myth that non-rolling release distros break less frequently than rolling release. Or at least, an anecdote anyway.

Jumping down from the meta level of rolling vs non-rolling, I personally find Arch Linux's style of packaging much, much simpler than Debian's. I've written numerous PKGBUILD files over the years and it's been dead simple. But my eyes glaze over whenever I look at Debian packages.

I'm sure the complexity in Debian is warranted for one reason or another. But it ain't for me.
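
(For anyone who hasn't seen one: a near-minimal PKGBUILD really is just a short shell file. The package name and URL here are made up:)

  pkgname=hello-example
  pkgver=1.0
  pkgrel=1
  pkgdesc="Toy example package"
  arch=('x86_64')
  license=('MIT')
  source=("https://example.com/$pkgname-$pkgver.tar.gz")
  sha256sums=('SKIP')

  build() {
    cd "$pkgname-$pkgver"
    make
  }

  package() {
    cd "$pkgname-$pkgver"
    make DESTDIR="$pkgdir" install
  }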


Note that Ubuntu had some rough times and did some odd things (I distinctly remember it changing the gid of system groups, for example), but it has gotten a lot better. I don't think I can remember Debian ever having any serious issues on dist upgrade. Well, there was a bit of hairpulling with the change from lilo to grub, but I think that was because I tried it early, before grub became the new standard.

But of course, there are no perfect tradeoffs. I'm inclined to believe GNU Guix (or NixOS) might be the next best thing(tm), but I've yet to put that to the test.


But what about Pop!_OS?


> I have multiple machines running Arch (some over a decade) with no problems.

This sounds extremely unlikely. Rolling and versioned releases both have breakage, but with rolling it comes in constant tiny ways, so it's less memorable. It's also a point Arch evangelists consistently fail to bring up.


I'm not evangelizing Arch. I'm pushing back on the bullshit that "dads don't have time for rolling release shenanigans."

Either you're calling me a liar or you're saying my memory is bad. Great talk. Thanks.


openSUSE Tumbleweed is also rolling release and stuff works out of the box. It is rpm-based, but it's the most tolerable rpm distribution imho.


I've had a non-ancient computer with the same Debian (testing with a bit of unstable mixed in) on it for over 16 years now. I still get excited about new stuff, I just don't want that new stuff to break anything. :-)


I think the web people have the right idea. If it hurts, do it more often [1], right?

I've used Debian for a long time (edit: on servers), but the short (and in some cases getting shorter) release cycles of popular projects like Chromium and Rust have got me wondering if Debian just moves too slow. I'm tempted to try using Fedora as a container base image and see if it's practical to keep up. Edit: I guess Fedora is no worse than Alpine if you actually update to each new major release quickly, since they both release every 6 months.

[1]: https://www.martinfowler.com/bliki/FrequencyReducesDifficult...
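
(The experiment is cheap to try; a hedged sketch of such a base image, nothing Fedora-specific beyond dnf:)

  FROM fedora:34
  # rebuild against each new Fedora release every ~6 months
  RUN dnf -y update && dnf -y install python3 && dnf clean all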


No, the web people have it wrong. It's called:

- Failure to do upfront design (yay, misinterpreted agile/MVP/etc)

- Reinventing wheels over, and over, and over

Newer versions and frameworks tend to be 10% improvement, and 90% unnecessary churn.


I'd say PHP is still getting a lot of good and important improvements


PHP exactly makes my point.

PHP was a no-good, terrible, poorly-made, horrific original design for something like powering Facebook. As a result, it needed a lot of good and important improvements.

Or to be more specific, it was a great design for what it was intended to be: a pet hobby project to make it easier to make Personal Home Pages, and was never intended to be a programming language.

https://en.wikipedia.org/wiki/PHP#Early_history

The concept of putting out something this bad, and then refactoring for over a quarter-century to make it almost-usable, is exactly the definition of the webdev way.

The result is a ton of churn.


And we've been making the same "mistakes" with JS too!

But if PHP and JS are so shit and full of churn, why are they so popular and widely used? And why have they been used to build large, globally successful projects?


With JS, the answer is because Netscape adopted it, and it had market share. If you wanted code which ran in web browsers, you needed to use it. Netscape, the company, no longer exists, so I wouldn't call it all too successful an outcome. From there, momentum.

The early history: in '95, Netscape hired Brendan Eich to embed a Lisp (Scheme) in Netscape, while simultaneously pursuing a partnership with Sun to embed Java. Sometime that year, the Lisp plan changed into a new language, which was created and shipped in beta by September 1995. We're talking a few months.

If we had designed a sane framework upfront -- even 6 more months design time -- the web would likely be a decade or two ahead right now.

Most of the early "improvements" to JavaScript were stop-gaps created under similar cultures, which would need to be replaced again and again. It's not that we didn't know how to make a decent programming language in 1995; it's that we didn't.

Today, I do about half of my coding in JavaScript since it runs in the browser, and therefore on all platforms. It has nothing to do with the painfully bad language design.


> the web would likely be a decade or two ahead right now

Seeing how the trajectory of web development seems to be "pile on more and more complexity, most of it unnecessary", I'm not sure if I'm supposed to be sad or happy about this.


Because they were in the right place at the right time. JS just happened to be the first web scripting language. And PHP competed against gems such as ASP (pre-.NET) and ColdFusion at the time when server-side web development was exploding.


That’s because our industry does not make choices based on merit; it makes them based on fashion.


Sure, I won't disagree with that. My point was more that the updates are usually more than 10% useful features. You can definitely make a point about how long it took PHP to get its type system into shape, but PHP is not exactly just some trendy framework of the week, and there are valid reasons to pick PHP as your language of choice (namely that it's the lowest-barrier language, especially for having non-technical people do their own installation on some random server/shared hosting).


Seen locally, all improvements are useful. PHP is better each year.

Seen globally, it's 10% improvement and 90% churn. One "improvement" is a half-fix, another is another half-fix, and so on, where after 5 improvements you're 97% of the way there.

In the meantime, you've broken backwards-compatibility five times, perhaps not always in the literal sense (old code still runs), but in the practical sense (old code has to be rewritten to follow modern conventions and be compatible with new code).

You've also had five learning curves.

And you have a ton of legacy stuff to support, from each of those five changes, leading to an almost unbounded explosion of complexity.


PHP and PostgreSQL get a big release every year with good improvements. That you have to wait for the next stable release of Debian (or, in this case, the one after that, because they did not upgrade to the newest one) is not a failure of the web people! We are talking about one release every year, not every few weeks! Getting them years later just shows how unrealistic these stable distros sometimes are.


> We are talking about one release every year, not every few weeks! Getting them years later just shows how unrealistic sometimes these stable distros are.

* https://deb.sury.org

* https://wiki.postgresql.org/wiki/Apt

There are times when I want slow movement, and times when I want fast movement. Debian allows for both.
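
(Both take a couple of commands to enable. For PostgreSQL it's roughly the below, following the wiki page above at the time of writing; double-check there before running:)

  sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
  wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
  sudo apt update && sudo apt install postgresql-13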


Sure, you can use external repositories. But why bundle old packages when everyone then has to use external repositories to avoid getting really old versions?


Testing, for one:

* QA to make sure all the interdependencies align correctly

* making sure the upgrade process runs smoothly, and that going from version X to version Y of some package is well-tested

It also depends on the upstream release cycle of the software package. The distro may wish to use the LTS version software (e.g., OpenJDK, OpenZFS, Django, Zabbix) but developers may wish to be more bleeding edge for new functionality.


Static binaries like Snap, macOS apps, etc. solve the "moving slowly" issue. There should be little need to update the core OS (kernel, initd, /etc stuff) often.


It is possible that if you build up a routine of always sticking to the latest, it will become an automatism. This opens the door to several types of risk.

Containers are always an option.


OpenJDK is still at 11 because that’s the latest LTS version. Subsequent versions have significantly shorter upstream support lifetimes (IIRC, six months). Until another LTS is released, it’ll continue to be 11.


There's no such thing as "LTS" for the OpenJDK.

LTS is strictly a JDK/JRE vendor concept. Oracle's distributions have 11 as an LTS and will have 17 as an LTS. That, however, only matters if you are paying Oracle for support.


Although technically true, your statement misses a fact: JDK 11 is still an LTS version for Oracle, which means that they are committed to improving that version with (security) fixes. These fixes will also be(come) available to the main OpenJDK distributions.

Therefore, as a user of any JDK distribution you can depend a bit on the LTS versions, in addition to what your distribution promises.


> These fixes will also be(come) available to the main OpenJDK distributions.

This isn't true! Oracle is under no obligation to mainline fixes from their JDK into OpenJDK.

That work is somewhat happening via IBM and Red Hat; however, Oracle is not doing it.

Further, new cuts of OpenJDK are not happening for those old versions. You MUST get the fixes via a build of OpenJDK that isn't the Oracle build of OpenJDK (like AdoptOpenJDK).

You can only depend on the "LTS" if the vendor of your JDK supports LTS. There's no "technically" about it.


Exactly, so officially you can indeed only rely on what your distribution promises you. In practice you may, with varying success, assume that if you are using a version that is LTS for one of the big distributions, some fixes will land in mainline as well, and therefore also in your distribution. Particularly for really critical fixes.

Of course, only officially rely on whatever agreement you have with your distribution/distributor.


This page https://adoptopenjdk.net/support.html has Java 8, 11 and 17 marked as LTS


Yes, that is LTS by the AdoptOpenJDK project. The GP was speaking of oracle LTS I presume.


In context, I'm talking about the OpenJDK project which has no concept of LTS.

LTS is strictly a JDK vendor concept. (AdoptOpenJDK is a JDK vendor). How long a version is supported depends entirely on the vendor.

A lot of vendors have adopted the oracle LTS strategy, some offering even longer support. However, it's vendor specific.

Debian is itself a JDK vendor, as they build OpenJDK themselves. So what LTS means for Debian is entirely up to what Debian is willing to do to support it.


The next LTS will be released on Sep 14, exactly one month from now.


It means nothing that it will be released in a month, because new software goes into unstable first, then is promoted to testing, and only then to stable. So even if the next LTS had been released a month ago, it certainly wouldn't have made it into Debian stable.



Too bad, now wait for 2023.


I think it’s rather Docker, Alpine, JDK 17 and jlink than waiting till 2023 :)


Very fair point :) For all my other dependencies, using 5-year-old versions is fine; it's mostly just PHP that gets an order of magnitude less painful to work with in each release, so being stuck a few versions behind is extra frustrating...


Is it that difficult to build and/or install a newer version that better fits your needs? I tend to stick to debian stable, but in the few cases where I need newer software I just build it myself. Using `apt build-dep [package]` and then something along the lines of `configure && make && make install` basically always works.
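
(Concretely, for a hypothetical package foo it's something like this; details vary per package:)

  sudo apt build-dep foo            # install everything needed to build it
  wget https://example.com/foo-2.0.tar.gz
  tar xf foo-2.0.tar.gz && cd foo-2.0
  ./configure --prefix=/usr/local   # keep it out of dpkg's way
  make && sudo make install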

edit: I guess if you work somewhere with a policy of requiring you to use system packages, you're a little screwed, but that seems more of an organizational problem than a debian problem.


In this case I’m building software whose primary selling point is “trivial to install on practically any cheap-ass $1/mo shared web host, just unzip and FTP the files to your website folder” - which is the whole reason it’s written in PHP in the first place.

(The more I think about it, the more sad I am that it’s 2021 and still no other language has come close to PHP’s low barrier to entry...)


PHP 8.x is very easy to install on debian by adding deb.sury.org to apt sources.

https://deb.sury.org/
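
(Roughly, per the instructions on that page at the time of writing; verify against the site first:)

  sudo apt install lsb-release apt-transport-https ca-certificates
  sudo wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg
  echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/php.list
  sudo apt update && sudo apt install php8.0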


Yes. Just as an example, I use VSCode. VSCode no longer works on CentOS 7. It needs a newer glibc, so I'm going to have to recompile a lot of stuff and do some PATH fiddling to get it to work.

It's usually not that it's hard to do that for your packages. It's when their dependencies go out of date, and I'm looking at having to build a Frankenstein system where I'm running one up-to-date system on top of a really out-of-date one, trying to make the two coexist in a single OS.

It's not insurmountable, but it's enough effort that dealing with occasional Arch breakage becomes easier. Or even something in between like Ubuntu.


Well I was referring specifically to debian and to PHP. Does VSCode really not compile on debian 10 or 11? I find that a little surprising. But yeah I agree that installing newer dependencies like glibc for VSCode would certainly be a hassle. I can't really speak much to that though since I've never used it.


VSCode is probably not the best example, because it has official packages for Debian (among many other things):

https://code.visualstudio.com/docs/setup/linux


That's because PHP decided it needs 6-month release cycles now. You're better off targeting PHP 7.4 for the next 3 years imho


PHP releases once a year. They've been doing that for almost a decade now.


In other settings I can add a new feature to one of my dependencies and start using it within days or weeks. You're telling me you prefer having to wait years to get any newly implemented feature?


Unironically yes. I would love to be done with the endless version churn and would trade many, though not all, features to get it.


So add a couple of vendor repos and jump the versions queue like everyone else. I do this for all my main dependencies... postgres, ruby, etc


Can you give an example?


Sure. In the Rust ecosystem, if you submit a pull request to a library it is very common for the maintainer to release a new version soon after. Even patches to the Rust compiler repository are released to stable in a matter of a few months (and the very next day for the nightly channel that many people use)

There's always workarounds and so forth, but it is really empowering to be able to have an impact over such a short timespan


The OS should not ship your language runtime. The OS is a giant cargo ship. It does not turn on a dime, and it is not piloted by a "let's rewrite it in Rust!" crew.


But it doesn't make much sense to run an outdated PHP. I agree with the point of the top-level comment: put your dependencies inside Docker and completely decouple them from OS dependencies.
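
(Which, for PHP, can be as small as this; a minimal sketch using the official php image and a hypothetical src/ directory:)

  FROM php:8.0-apache
  COPY src/ /var/www/html/

Build with `docker build -t myapp .` and run with `docker run --rm -p 8080:80 myapp`.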


You don't seem to know anything about the "web people" you're criticizing here. PHP skills learned two decades ago are still very relevant today. It's the Java of the web world.


I initially learned PHP 4, back when PHP 3 was still in use. Later I learned PHP 5, which IIRC was the first version that had classes. I also learned Drupal 6 and then 7.

Modern PHP code looks nothing like PHP code back then. Of course, the core syntax is mostly the same, but if you look at a framework like D6 or D7 no one would do anything even remotely like this today. PHP security has completely changed since then as well.


What criticism?


That's an excellent point: you described a difference, which is in fact real. Any perceived criticism is just that: a perception.


I thought that Java was the Java of the web world.


Java is the Java of the Java world.


I don't understand the downvotes at all. Yes, PHP 8 is different from, let's say PHP 5, but that doesn't mean knowing PHP 5 doesn't help with writing good PHP 8. Same with Python 2 vs 3.


Hmm no, not at all. My first job in 2002-2004 was using PHP4. I barely used classes (and if I did they were just my code, not from the library because I didn't need anything from PEAR) and used the original MySQL client API. Almost nothing of what I used back then would be useful today. PHP4 vs PHP8 is almost more different than Python 2 to Python 3.


Nonsense. +95% of Python 2 knowledge is transferable to Python 3. Same for PHP 4 to PHP 8.


I agree with your point, but your first word may get you many a downvote; I would remove the "nonsense" word to increase the penetration :).


There is a fine line where you need to weigh which matters more: using a stable version or an up-to-date, security-supported version. I fully support the latter. PHP 7.4 security support will end in November 2022; will Debian upgrade to 8.0 once this happens?


No, they'll start backporting security fixes. There are still PHP 5 versions in the wild.


I read GP as considering ~4 years total and 2 years out of support a long time.


> They don't break stuff.

Unless they do, due to their stance on licenses, and remove mp3 support from the lame package.


OpenJDK 17 is in Debian 11 as well [0]. A decision was made to include it before its official release since the release is so close (September). An updated package will be available when it's released.

From the release notes [1]:

Debian bullseye comes with an early access version of OpenJDK 17 (the next expected OpenJDK LTS version after OpenJDK 11), to avoid the rather tedious bootstrap process. The plan is for OpenJDK 17 to receive an update in bullseye to the final upstream release announced for October 2021, followed by security updates on a best effort basis, but users should not expect to see updates for every quarterly upstream security update.

[0] https://packages.debian.org/source/testing/openjdk-17

[1] https://www.debian.org/releases/bullseye/amd64/release-notes...
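
(So on bullseye, assuming the binary package name matching [0], the early-access build should be one command away:)

  sudo apt install openjdk-17-jdk
  /usr/lib/jvm/java-17-openjdk-amd64/bin/java -version   # early-access 17 until the final update lands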



