Asking in good faith out of genuine curiosity: I kind of associate ClickHouse with Yandex. What's the present-day relationship and legal setup, and how does it square with Western sanctions against Russia?
When I read that title, I was expecting the following story: "Academic ghostwriters", thanks to AI, are now completing online degrees by the hundreds per actual human headcount, selling the opportunity to put one's name on the "work" to fraudulently obtain a degree.
What jumps out at me is the paragraph "Governance and leadership reforms" in the original letter sent by the government to the university.
The other stuff is hard to make sense of, but this part is crystal clear: the authoritarian government is asking the university to restructure itself along more authoritarian lines. Essentially, Trump wants continuity of reporting lines leading ultimately up to him, and reaching down to the individual faculty member, student, and foreign collaborating partner. That sort of thing could come in handy for all kinds of things in the future, not just the silly demands of the present.
Services are ignored by Trump for precisely the reason you mention. The big question is: what will countries like Germany do, which tend to export goods to the U.S. but import services? Right now, those are the countries that would rather prevent this thing from escalating, but if escalate they must, and they run out of ammunition within the scope of tariffs on goods, where will they go next?
Tariffs on services may also be less popular with citizens. It's not obvious that the locally produced fridge is cheaper only because of tariffs, but it will be very obvious that anything from a non-EU provider costs more. It will also be harder to control (how would the EU extract tariffs on payments I make to companies with no EU presence?).
But I don't know much about it; maybe these are already solved problems. After all, VAT already exists and faces similar challenges.
Services aren't traditionally part of tariffs; tariffs apply only to physical stuff moving across borders.
That being said: I currently work in a services-oriented business "exporting" services to the U.S., and the leadership of that company seems to be getting very worried, trying to diversify its customer base away from the U.S.
If, in the cycle of retaliatory action, they run out of ammunition with tariffs on goods, who knows what other crazy ideas will come to the surface: tariffs on services do come to mind, maybe restrictions around recognition/enforcement of foreign-owned intellectual property, ...
Tariffs on services are much harder to enforce. There's no point of entry, so it's harder to check.
However, some countries have a withholding tax for services provided by foreign companies. The client is responsible for withholding the amount from any payment and paying the government.
And banks play a role in the enforcement if needed.
I'll offer a less charitable framing of the whole topic of immutable / atomic distros: This is pretty much Linux distributors deciding they want to stop doing their job (or redefine that job to a much smaller scope). -- I'm not saying it's not justifiable that the ecosystem may need to be reshaped in that way. I'm just cautioning people against drinking the “this is the future and the future looks bright” Kool-Aid all too easily.
The job of making a Linux distribution has always been what, in an old-fashioned term, used to be called “system integration” work. They would start with a bewilderingly huge array of open-source packages, each being developed without any centralized standard or centralized control over what the system actually looks like. Then they would curate a collection of build recipes and patches for those packages.
The value a distro delivers for the user is that, for any package “foo” that their heart desires, a user can just say “apt install foo” and it'll “just work”. There will be default configuration and patches to make foo integrate perfectly with the rest of the system.
The value a distro delivers for package maintainers is: “Don't worry about the packaging. Just put your code out as open source, and we'll take care of the rest.”
The job of a distributor is extremely difficult, because of all the moving parts: People select their hardware, their packages, and they mess with the default configurations. It is no wonder at all that Linux distributions don't always succeed in their mission to truly deliver on this. But it's a huge engineering achievement that they work as well as they do, and I think we shouldn't lightly give up on that achievement.
What we have now is basically distros going: Awwwww. Fuck it. This is too hard. I'm done with this. You know what? Instead of “any package your heart desires”, you get a fixed set of packages. The ones that everyone needs regardless of what they actually do with their computer. Instead of being allowed to mess with your configuration, we'll make your rootfs read-only. (In the case of SteamOS:) Instead of doing our best to make it work on your hardware, we'll tell you precisely which piece of hardware you'll need to buy if you want our software to run on it.

User: Well, that's additional money I need to spend. And how do I install my favourite app “foo”? The one I need to actually get useful work out of my computer?

Distro: Don't worry, we've got you covered. We'll provide a runtime for distrobox and flatpaks.

Package maintainer of “foo”: How do I get my package out in a way that perfectly integrates with distros?

Distro: Make a container. Congratulations: this is additional work you have to do now, that you didn't have to do before. And about that idea of perfect integration: you can kiss that goodbye.

User: I don't know. I'm also in favour of integration.

Distro: That's alright. You can share and unshare stuff between containers and the host system. This, of course, is additional work you didn't have to do before.

Less work for me, more work for everyone else. The future looks so bright.
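For concreteness, here's roughly what that “we've got you covered” workflow looks like in practice today. This is only a sketch; the package name “foo” and the container image are placeholders:

    # Hypothetical distrobox workflow on an immutable distro; "foo" is a placeholder.
    distrobox create --name work --image debian:stable   # create a mutable container
    distrobox enter work                                 # get a shell inside it
    sudo apt install foo                                 # install "foo" there, not on the host
    distrobox-export --app foo                           # run inside the container: export foo's launcher back to the host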
In what I wrote above, I wasn't referring to NixOS or Guix. I was thinking of the other ones (SteamOS, Fedora Silverblue, openSUSE Aeon, Vanilla OS, etc.) -- In fact, I think it's a bit misleading to lump them together in the same category of "atomic" or "immutable". This term has come to mean way too many different things.
To be honest, most developers would much prefer to ship containers or Flatpaks if they just work on any Linux machine.
There is no free lunch: the developer might feel that his package got into apt magically without any effort on his part, but the maintainer has to put in that effort, and the result might not be as streamlined for the developer as a container created by the dev himself.
It also provides more security. Flatpaks are really neat, but in my opinion they aren't used much in the CLI world. I wanted to make a CLI Flatpak and I just couldn't, so I gave up.
AppImages are also nice, but they have some issues too. I had created appseed, which automatically turned dynamic binaries into a static binary using zapps.app, but it has some issues and I am too lazy to fix them.
What kind of integration do you mean? Basically the only integration that distros do is forcing all packages into one library dependency, which is something with relatively little user-facing benefit (in fact, it's mostly to make it easier for the maintainers to do security updates). This push towards appimages and the like is basically about standardising the interface between the distro and the application, so application developers don't need to rely on the distros packaging their app correctly, or to do N different packages for N different distros and deal with N different arbitrary differences between them (and if they want to delegate this packaging work like before, they can. Not all of these various packages are put out by the author of the software).
(Now, whether these various standards work well enough is a different question. There seems to be a bit of a proliferation of them, all of which have various weaknesses ATM, so there are still improvements to be made there, but the principle is fairly sensible if you want to a) have a variety of distros and b) not have M*N work to do for M applications and N distros.)
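For what it's worth, here's a sketch of what that standardised interface looks like from the developer's side, using Flatpak as the example. The app id org.example.Foo and the manifest contents are made up; the general shape is what matters:

    # Hypothetical minimal Flatpak packaging; org.example.Foo is a placeholder.
    # (Assumes the org.freedesktop Platform/Sdk runtimes are installed.)
    cat > org.example.Foo.yaml <<'EOF'
    app-id: org.example.Foo
    runtime: org.freedesktop.Platform
    runtime-version: '23.08'
    sdk: org.freedesktop.Sdk
    command: foo
    modules:
      - name: foo
        buildsystem: simple
        build-commands:
          - install -Dm755 foo /app/bin/foo
        sources:
          - type: dir
            path: .
    EOF
    # One build, installable on any distro with Flatpak set up:
    flatpak-builder --user --install --force-clean build-dir org.example.Foo.yaml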
I very much work at the coalface here, and "application developers don't need to rely on the distros packaging their app correctly" occasionally happens but is most often about miscommunication. Application developers should talk to the distros if they think there's a packaging problem. (I talk to many upstreams, regularly.) Or, more often, application developers don't understand the constraints that distros have, like we need a build that is reproducible without downloading random crap off the internet at build time, or that places configuration files in a place which is consistent with the rest of the distro even if that differs a bit from what upstream thinks. Or we have high standards for verifying licensing of every file that is used in the build, plus a way to deploy security updates across the whole distro.
And likewise, packagers often don't understand that the application has been extensively tested with one set of library versions, that changing them around to fit the distro's tastes will cause headaches for the developers of that application, and that they have a vendored fork of some libraries because the upstream version would cause bugs in the application. It's a source of friction, the goals are different, and users are often caught in the crossfire when it goes poorly (and when each application is packaged N times, there are N opportunities for a distro to screw something up: it's extremely rare that a distro maintainer spends anywhere near the amount of time on testing and support that the upstream developers do, since maintainers are usually packaging many different applications, while upstream is usually multiple developers focused on one project).
Software should be written robustly, and libraries shouldn't keep changing their APIs and ABIs. It's a shame some people who call themselves developers have forgotten that. Also, you're assuming that distro packagers don't care, which is certainly not true. We are the ones who get to triage the bugs.
They should, but the world isn't perfect and occasionally you do actually need to apply workarounds (which application developers also dislike having to deal with, but it's better than just leaving bugs in). Distros would run screaming from the bare metal embedded world where it's quite common to take a dependency and mostly rewrite it to suit your own needs.
And I'm not saying distro maintainers don't care, I'm just saying they frequently don't have the resources to package some applications correctly and test them as thoroughly, especially when they're deviating in terms of dependencies from what upstream is working with. And much as the fallout from that should land on the distro maintainer's plate, it a) inevitably affects users when bugs appear in this process, and b) increases workload for upstream because users don't necessarily understand the correct place to report bugs.
The place where my argument is coming from is that the MxN nature is pretty much inescapable.
> What kind of integration do you mean?
See? The "integration" is something you only notice when it breaks (or when you're working through LFS and BLFS in preparation for your computer science Ph.D.) -- This kind of work is currently being done pretty well, so it rarely breaks, so people think it doesn't even exist. Also notice that a Linux distro is what's both on the outside and the inside of most containers. If Debian stops doing integration work, no amount of containerization will save us.
So, what kind of breakage might there be? Well, my containerized desktop app isn't working. It crashed and told me to go look for details in the logfile. But the logfile is nowhere to be found. ...oh, of course. The logfile is inside the container. No problem, just "docker exec -ti <container> /bin/bash" to go investigate. Ah, problem found. DBUS is not being shared properly with the host. Funny. Prior to containerization I never even had to know what DBUS was, because it just worked. Now it's causing trouble all the time. Okay, now just edit that config file. Oh, shoot. There's no vi. No problem, just "apt-get install vi" inside the container. Oh, "apt" is not working. Seems like this container is based on Alpine. Now what was the command to install vi on Alpine again?

...one day later. Hey, finally got my app to start. Now let's start doing some useful work. Just File|Open that document I need to work on. The document sits on my NAS, which is mounted under "/mnt/mynas". Oh, it's not there. Seems like that's not being shared. That would have been too good to be true. Now how do I do that sharing? And how does it work exactly? If I change the IP address of my NAS and remount it on the host, does the guest pick that up, or do I need to restart the app? Does the guest just have a weak reference to the mountpoint on the host? Or does it keep a copy of the old descriptor?

...damn. In 20 years of doing Linux, prior to containerization, I never needed to know any of this. ...that's the magic of "system integration". Distros did that kind of work so the rest of us didn't have to.
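And for the record, here's roughly the kind of incantation it takes to punch those two holes manually. A sketch; the image name and paths are made up, and it assumes the host's runtime dir path is reused inside the container:

    # Hypothetical invocation; "myapp:latest" is a placeholder image.
    # Share the session DBus socket and the NAS mountpoint with the container:
    docker run -ti \
      -v "$XDG_RUNTIME_DIR/bus:$XDG_RUNTIME_DIR/bus" \
      -e DBUS_SESSION_BUS_ADDRESS="unix:path=$XDG_RUNTIME_DIR/bus" \
      -v /mnt/mynas:/mnt/mynas \
      myapp:latest
    # Note: bind mounts are private by default, so remounting the NAS on the
    # host does not propagate into a running container; you restart the app.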
God, yes. I did some training courses over Zoom. The presenter frequently shared PDF files we had to interact with, but the Zoom download button dropped them inside the Zoom container. Figuring out how to get hold of them was a PITA.
Of course, the Windows users didn't have this problem. Flatpak, etc. are objectively making the Linux user experience worse.
Those aren't particularly useful examples, though. They're all things that have been artificially separated in containers, and now there's a bunch of work to punch the right holes in that separation, because people want the sandboxing of containers from a minimum-trust point of view, and that's pretty hard to get right. Previously this wasn't a problem, not because the distros solved it, but because there was no separation of DBus or views of the filesystem or the like.
(DBus, much like a lot of the rest of desktop integration, is something that has been standardised quite heavily, such that you can expect any application that uses it to basically work without any specific configuration or patching, unless you've insisted on fiddling with the standard setup for some reason; a small illustration of this follows below. It used to be that the init system was an area which lacked this standardisation, but systemd has evened out a lot of these differences, which distro and app maintainers as well as users have all benefited from significantly. Most of containerisation is basically trying to do the same with libraries as well, but most projects are also trying to achieve some level of sandbox separation between applications at the same time.)
(This is one reason why I don't much like a lot of the existing approaches here: I think the goals are admirable and the overall approach makes sense, but the current solutions fall quite short)
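To illustrate the degree of standardisation in the DBus case: the very same session-bus query works unmodified on essentially any modern desktop distro, with no per-distro configuration. A minimal sketch:

    # Canonical DBus smoke test: list the names registered on the session bus.
    # Works the same on any distro with a standard session bus; no setup needed.
    dbus-send --session --print-reply \
      --dest=org.freedesktop.DBus /org/freedesktop/DBus \
      org.freedesktop.DBus.ListNames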
One problem I see with the current state of the IndieWeb having not yet "taken off" is the negative selection effect you get from having the corporate web in existence next to the IndieWeb. Back in the 90s you might write a website advertising your services as an accountant in Word, save as HTML, and upload that HTML to geocities or to webspace provided by your ISP. You can still do that today. The shocking difference is that back then it made actual business sense to do that: People would just stumble across your website. You could reach actual normies. Nowadays, because of the presence of the corporate web next to the Indie Web, the normies are trapped in the corporate web and the only eyeballs your website will attract will be the people who share in your own brand of weird. They might admire the "art" inherent in your crappy HTML, but they won't hire you as an accountant. And they're probably not strong enough in numbers for this to make any business sense.
This is actually the homepage of my accountant: “Instead of building a big website, we decided to save the money. If you need accounting done, here is where you can reach us.” It's just black on white with blue links.
My brother's a carpenter, and I pushed him in the same direction. "Here's our contact details, and here's a link to the instagram where we post pictures of the cool stuff we make". I think of it as a digital business card.
Reducing the dependency tree gets a bit more complicated once you consider that you now have to satisfy not only runtime dependencies for all packages but also build-time dependencies. There may be ways of cleaning those up after a build, but the next time you want to emerge a new package you'll just have to re-build the build-time dependencies, so in practice you end up leaving them there. There is the ability to emerge packages into a separate part of the filesystem tree (ROOT="/my/chroot" emerge bla), so that one build-time system acts as a kind of incubator for a runtime system that gets to be minimal. But you'll run into problems most other Gentoo users wouldn't encounter, having to do with the separation between build-time and runtime dependencies not being made correctly in the recipes. Personally, I had been relying on this feature for roughly the last 10 years, but there has been steady deterioration there and I eventually gave up late last year.
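For anyone who hasn't used it, the incubator pattern looks roughly like this (the target path and package atom are made up):

    # Hypothetical: /my/chroot and sys-apps/foo are placeholders.
    # With ROOT set, build-time deps are installed to the host (/) by default,
    # and only runtime deps land in the target tree:
    ROOT=/my/chroot emerge sys-apps/foo
    # --root-deps=rdeps discards build-time deps of target packages entirely;
    # this is exactly where ebuilds that mislabel build-time vs. runtime
    # dependencies start to bite:
    ROOT=/my/chroot emerge --root-deps=rdeps sys-apps/foo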
This is a good point. I've been using Gentoo since early 2004 (the dreaded Pentium 4 era, lol). Lately, I run into this with dev-lang/tcl only being needed to build dev-db/sqlite. I actually think it's pretty weird that software intended to be as widely used as sqlite, with as large a base of supporting devs, doesn't just put in the extra effort to use a plain Makefile.
I had had Gentoo continuously in use since 2003, and only very recently (late 2024) moved off of it when I tried Void Linux. On Void, buildability from source by end users is not a declared goal or an architectural feature, but you have a pretty decent chance of being able to make it work. You can expect one or two hiccups, but if you have decent all-round Linux experience, chances are you'll be able to jump into the build recipes, fix them, make everything work for what you need it to do, and contribute the fixes back upstream. This is what you get from a relentless focus on minimalism and on avoiding overengineering of any kind. It's what I had been missing in Gentoo all those years. With Gentoo, I always ended up having to fiddle with USE flags and package masks in ways that wouldn't be useful to other users. The build system is so complex that it had been just too difficult for me, over all these years, to properly learn it, fix problems at the root-cause level, and contribute the fixes upstream. Void should also be an ideal basis for when you don't want to build the entire system from source, but just want to mix & match distro-provided binaries with packages you've built from source (possibly from a modified build recipe, to better match your needs or your hardware).
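In case it's useful to anyone, the mix & match workflow on Void looks roughly like this ("foo" is a placeholder package):

    # Hypothetical: build one package from (possibly modified) source,
    # while everything else stays distro-provided binaries.
    git clone https://github.com/void-linux/void-packages
    cd void-packages
    ./xbps-src binary-bootstrap        # set up the build chroot
    # ...edit srcpkgs/foo/template to match your needs or hardware...
    ./xbps-src pkg foo                 # build the package
    sudo xbps-install --repository=hostdir/binpkgs foo   # install the local build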
FWIW: They have provisions in their bylaws (which can only be changed with the assent of their public interest asset-locked shareholder) that restrict salaries to a level that's commonplace in the industry specifically in Germany. In Germany, software engineers and managers tend to make a lot less than they do in the U.S., certainly not an amount of money that's a meaningful tradeoff for giving up rights to dividends and other distributions.