Why? It's immunologically sensible: let the immune system train antibodies on a non-lethal amount of a novel protein antigen, like traditional vaccines, and (I'd bet, to your point) in stark contrast with "homeopathy" in some definitions.
Homeopathy is way more extreme, for what it's worth. The idea is to keep diluting the thing until it's basically chemically absent, with only the "nature" or the juju or whatever left behind.
As a semi-European it's a bit bizarre. There's probably more of a worry about the physical destruction in Ukraine, which is also semi-Europe. It'd be nice if, as a condition for peace talks, Trump could require the Russians to stop sending missiles into apartment blocks and similar civilian targets.
Back in the day repositories had 'maintainers' who reviewed packages before they became included. I guess no one really cares in the web dev world; it's a free-for-all.
I'm curious how much review happens in Nix packages. It seems like individual packages have maintainers (who are typically not the software authors). I wonder how much latitude they have to add their own patches, change the source repo's URL, or other sneaky things.
Not a lot in most cases. You're still just grabbing a package and blindly building whatever source code you get from the web. Unless the maintainer is doing their due diligence, there's nothing.
The same goes for almost all packages in all distros, though.
I’d say most of us have some connection to what we’re packaging but there are plenty of hastily approved and merged “bump to version x” commits happening.
Nixpkgs package maintainers don't usually have commit rights. I assume that if one tried to include some weird patch, the reviewer would at least glance at it before committing.
I've never looked at the process of making a nixpkg, but wouldn't the review process only catch something malicious if it was added to the packaging process? Anything malicious added to the build process wouldn't show up, correct? At least not unless the package maintainer was familiar with it and looked themselves?
I am not sure I understand the distinction between the packaging and build process, at least in the context of nixpkgs. Packages in nixpkgs are essentially build instructions, which you can build/compile locally (like Gentoo), but normally you download the built results from the cache.
Official packages for the nixpkgs cache are built/compiled on Nix's own infrastructure, not by the maintainers, so you can't just sneak malicious code in that way without cracking into the server.
What package maintainers do is contribute these build instructions, called derivations. Here's an example of a moderately complex one:
As you can see, you can include patches to the source files, add custom bash commands to be executed, and point the source code download link anywhere you want. You could do something malicious in any of these steps, but I expect the reviewer to at least look at it and build it locally for testing before committing, as would any other interested party.
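Roughly, a derivation with those hooks might look like the sketch below. This is only an illustration: the package name, repo URL, patch file, and hash are all made up, and a real nixpkgs derivation pins an actual content hash.

    # Hypothetical nixpkgs-style derivation: pinned source, a maintainer patch,
    # and arbitrary shell commands in a build hook.
    { lib, stdenv, fetchFromGitHub }:

    stdenv.mkDerivation rec {
      pname = "somelib";
      version = "1.2.3";

      # The maintainer chooses where the source comes from; the content hash
      # pins the exact bytes, so silently swapping the upstream changes the hash.
      src = fetchFromGitHub {
        owner = "example";
        repo = "somelib";
        rev = "v${version}";
        hash = lib.fakeHash; # placeholder; a real derivation pins the real hash
      };

      # Maintainer-supplied patches applied on top of the upstream sources.
      patches = [ ./fix-install-prefix.patch ];

      # Arbitrary bash run after unpacking/patching -- the kind of hook a
      # reviewer should actually read.
      postPatch = ''
        substituteInPlace Makefile --replace "/usr/local" "$out"
      '';

      meta = with lib; {
        description = "Hypothetical example library";
        homepage = "https://github.com/example/somelib";
        license = licenses.mit;
      };
    }

The src, patches, and postPatch attributes are the three places the comment above is talking about: each is plain text in the pull request, so a reviewer who reads the diff can see exactly what the maintainer is fetching, patching, and running.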
OCaml's opam does have a review process, although I'm not sure how exhaustive it is. It's got a proper maintenance team checking for package compatibility, updating manifests and removing problematic versions.
I don't think this would be viable if the OCaml community grew larger though.
Some alternative sources for other languages do it. Conda-forge has a process that involves some amount of human vetting. It's true that it doesn't provide much protection against some kinds of attacks, but it makes it harder to just drop something in and suddenly have a bunch of people using it without anyone ever looking at it.
IMO C/C++ is not much better. Sure, there's no central package management system, but then people rewrite everything because it's too hard to use a dependency. Now if you do want to use one of the 1000 rewrites of a library, you'll have a lot more checking to do, and integration is still painful.
Painless package management is a good thing. Central package repositories without any checking aren't. You don't have to throw away the good because of the bad.
I have that in C++: we wrote our own in-house package manager. It's painless for any package that has passed our review, but since it is our manager, we enforce rules you need to pass before you can get a new package in, ensuring it is hard to use something that hasn't been through review.
I'm looking at Rust, and the fact that it doesn't work well with our package manager (and our rules for review) is one of the big negatives!
Note: if you want to do the above, just use Conan. We wrote our package manager before Conan existed and it isn't worth replacing, but it also isn't worth maintaining your own. What is important is that you can enforce your review rules in the package manager, not which package manager it is.
> Painless package management is a good thing. Central package repositories without any checking aren't.
There's a reason why these things go hand in hand, though. If the package management is so painless that everyone is creating packages, then who is going to pay for the thoroughly checked central repository? And if you can't fund a central repository, how do you get package management to be painless?
The balance that most language ecosystems seem to land on is painless package management by way of free-for-all.
> And if you can't fund a central repository, how do you get package management to be painless?
You could host your own package server with your own packages, and have the painless package manager retrieve these painlessly.
Of course we're in this situation because people want that painlessness to extend to what other people built. But "other people" includes malicious actors every once in a while.
Correct me if I'm wrong, but the usual advice in the C/C++ world is to just grab the source code of any libraries you want and build them yourself (or use built-in OS libs). This is not great if you have a lot of dependencies.
Anyone else feel like AI is a trap for developers? I feel like I'm alone in the opinion it decreases competence. I guess I'm a mid-level dev (5 YOE at one company) and I tend to avoid it.
I agree. I think the game plan is to foster dependency and then hike prices. Current pricing isn't sustainable, and a whole generation of new practitioners will never learn how to mentally model software.
> I think good "science communication for the masses" is just a really hard problem.
Not only is it hard, but there are tons of examples of hoaxes that spread like wildfire. The latest I can remember were the room-temperature superconductor papers.
Yeah, I'm still disappointed by that one. I was super excited by the LK99 stuff; I don't know if that was a "hoax", but it was definitely bad science that took the media by storm.
I think that the problem is that there's effectively an infinite amount of science and it changes and updates all the time, so it's impossible to be truly "caught up" with everything, and most studies are already in pretty specific niche subjects that require a lot of understanding of that niche. Most people doing science communication can't possibly learn it all, and most certainly aren't equipped to call out fraud or bad science in a paper, so they have to take the papers at their word.
I mean, before I dropped my PhD, I was studying formal methods in computer science. I got reasonably good with state machine models in Isabelle, so you'd think I'd be competent with "formal methods" as a concept, but not really. If I were to try and read a paper on, I don't know, "Cubical Type Theory with Agda", I would have to do a lot of catching up, almost starting from scratch, and I think I'm probably better equipped than the average software engineer for that. Even if I got to a state of more-or-less understanding it, I would certainly not be equipped to call out bad science or math or fraud or anything like that.