I see you use Cargo features for this. One thing to be aware of is Cargo's feature unification (https://doc.rust-lang.org/cargo/reference/features.html#feat...), i.e. if an application embeds crate A that depends on nova_vm with all features and crate B that depends on nova_vm without any security-sensitive features like shared-array-buffer (e.g. because it runs highly untrusted JavaScript), then interpreters spawned by crate B will still have all features enabled.
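To make it concrete, a rough sketch of the situation (the manifests below are placeholders, not taken from any real project):

    # crate_a/Cargo.toml -- wants everything
    [dependencies]
    nova_vm = { version = "*", features = ["shared-array-buffer"] }

    # crate_b/Cargo.toml -- deliberately minimal
    [dependencies]
    nova_vm = { version = "*", default-features = false }

When one application pulls in both crates, Cargo compiles a single copy of nova_vm with the union of the requested features, so the shared-array-buffer code paths end up in the interpreters crate B spawns as well (the v2 feature resolver only avoids this for build, dev, and inactive target-specific dependencies, not for normal ones).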
Is there another way crate B can tell the interpreter not to enable these features for the interpreters it spawns itself?
Nice catch, thanks for pointing that out! This also might be less than ideal if it’s the only option (rather than in addition to a runtime startup flag or a per-entrypoint/execution flag): one could feasibly want to bundle the engine with the app with features x, y, and z enabled, but only allow some scripts to execute with a subset of those while running other scripts with a different subset.
Thank you for pointing this out, I'll have to look into this at some point.
There is currently no other way to disable features, and at least for the foreseeable future I don't plan on adding runtime flags for these things. I'm hoping to use the feature flags for optimisations (e.g. no need to check for holes, getters, or the prototype chain in Array indexed operations if those are turned off) and I'm a bit leery of making those kinds of optimisations if the feature flags are runtime-controllable. It sounds like a possible safety hole.
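To sketch what I mean (made-up names, not the actual nova_vm internals): with a compile-time feature the slow path simply isn't compiled, whereas a runtime flag leaves a branch, and an assumption that can be violated, on every access:

    // Made-up sketch, not nova_vm's actual code.
    #[derive(Clone, Copy)]
    struct Value(f64);

    // Feature off: holes are impossible by construction, so the indexed
    // read needs no hole check, getter lookup, or prototype-chain walk.
    #[cfg(not(feature = "array-holes"))]
    fn array_get(elements: &[Option<Value>], index: usize) -> Value {
        elements[index].expect("dense array: holes cannot exist")
    }

    // Feature on: the full path (holes here; getters and the prototype
    // chain in a real engine) gets compiled instead.
    #[cfg(feature = "array-holes")]
    fn array_get(elements: &[Option<Value>], index: usize) -> Value {
        elements.get(index).copied().flatten().unwrap_or(Value(f64::NAN))
    }

    // Runtime flag: the branch is paid on every access, and the fast path
    // has to stay correct even if the flag is flipped out from under it.
    fn array_get_runtime(holes_enabled: bool, elements: &[Option<Value>], index: usize) -> Value {
        if holes_enabled {
            elements.get(index).copied().flatten().unwrap_or(Value(f64::NAN))
        } else {
            elements[index].expect("caller promised a dense array")
        }
    }

    fn main() {
        let dense = vec![Some(Value(1.0)), Some(Value(2.0))];
        let _ = array_get(&dense, 1);
        let _ = array_get_runtime(false, &dense, 1);
    }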
For now I'll probably have to just warn (potential) users about this.
> [...] the contents of the entire memory to be read over time, explains Rüegge. “We can trigger the error repeatedly and achieve a readout speed of over 5000 bytes per second.” In the event of an attack, therefore, it is only a matter of time before the information in the entire CPU memory falls into the wrong hands.
We need software and hardware to cooperate on this. Specifically, threads from different security contexts shouldn't get assigned to the same core. If we guarantee this, the fences/flushes/other clearing of shared state can be limited to kernel calls and process lifetime events, leaving all the benefits of caching and speculative execution on the table for things actually doing heavy lifting without worrying about side channel leaks.
I get you, but devs struggle to configure nginx to serve their overflowing cauldrons of 3rd party npm modules of witches' incantations. Getting them to securely design and develop security-labelled, cgroup-based micro (nano?) compute services for inferencing text of various security levels is beyond even 95% of coders. I'd posit that it would be a herculean effort even for 1% devs.
It's not a "just" if the fix cripples performance; it's a tradeoff. It is forced to hurt everything everywhere because the processor alone has no mechanism to determine when the mitigation is actually required and when it is not. It is 2025 and security is part of our world; we need to bake it right into how we think about processor/software interaction instead of attempting to bolt it on after the fact. We learned that lesson for internet facing software decades ago. It's about time we learned it here as well.
Yeah… folks who think this is just some easy-to-avoid thing should go look around and find the processor without branch prediction that they want to use.
On the bright side, they will get to enjoy a much better music scene, because they’ll be visiting the '90s.
There is of course a slight chicken-and-egg thing here: if there was no (dynamic) branch prediction, we (as in compilers) would emit different code that is faster for non-predicting CPUs (and presumably slower for predicting CPUs). That would mitigate a bit of that 10x.
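As a rough illustration (not the output of any particular compiler): a compiler targeting a non-predicting CPU would lean much harder on branchless sequences like conditional moves, along these lines:

    // Branchy version: cheap when the branch predicts well, a long stall
    // on a non-predicting (or mispredicting) pipeline.
    fn max_branchy(a: i64, b: i64) -> i64 {
        if a > b { a } else { b }
    }

    // Branchless version: same instructions every time, nothing to predict.
    fn max_branchless(a: i64, b: i64) -> i64 {
        let mask = -((a > b) as i64); // all ones if a > b, else zero
        (a & mask) | (b & !mask)
    }

    fn main() {
        assert_eq!(max_branchy(3, 7), max_branchless(3, 7));
        assert_eq!(max_branchy(9, 2), max_branchless(9, 2));
    }

Compilers already do this selectively today; the point is only that the cost model, and therefore the emitted code, would shift.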
But if the fix for this bug (how many security holes have there been now in Intel CPUs? 10?) brings only a couple % performance loss, like most of them so far, how can you even justify that at all? Isn't there a fundamental issue in there?
How much improvement would there still be if we weren't so lazy when it comes to writing software? If we were working to get as much performance out of the machines as possible and avoiding useless bloat, instead of just counting on the hardware to be "good enough" to handle the slowness with some grace?
A modern processor pipeline is dozens of cycles deep. Without branch prediction, we would need to know the next instruction at all times before beginning to fetch it. So we couldn’t begin fetching anything until the current instruction is decoded and we know it’s not a branch or jump. Even more seriously, if it is a branch, we would need to stall the pipeline and not do anything until the instruction finishes executing and we know whether it’s taken or not (possibly dozens of cycles later, or hundreds if it depends on a memory access). Stalling for so many cycles on every branch is totally incompatible with any kind of modern performance. If you want a processor that works this way, buy a microcontroller.
But branch prediction doesn't necessarily need complicated logic. If I remember correctly (it's been 20 years since I read any papers on it), the simple heuristic "all relative branches backwards are taken, but forward and absolute branches are not" could achieve 70-80% of the performance of the state-of-the-art implementations back then.
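Roughly, the rule I remember (sketched here with made-up names) was just a function of the branch direction, with no per-branch state at all:

    // "Backward taken, forward not taken": backward relative branches are
    // usually loop back-edges, forward ones are usually guards/early exits.
    // Absolute/indirect branches, whose targets aren't known at fetch time,
    // were treated as not taken.
    fn predict_taken(branch_pc: u64, target_pc: u64) -> bool {
        target_pc < branch_pc
    }

    fn main() {
        assert!(predict_taken(0x1040, 0x1000));  // loop back-edge
        assert!(!predict_taken(0x1000, 0x1040)); // forward guard
    }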
I don't know, guys. Yes, the direct link saves a click, but the original title was more informative for the casual reader. I'm not a professional karma farmer, and in dang's shoes I would have made the same adjustment, but I can't deny that seeing the upvote rate going down by 75% after the change was a little harsh.
Installable versions of Windows apps still bundle most of the libraries like portable apps do, because Windows does not have a package manager to install them.
Apart from the Microsoft Visual C++ Runtime, there's not much in the way of third-party dependencies that you as a developer would want to pull in from there. Winget is great for installing lots of self-contained software that you as an end user want to keep up to date. But it doesn't really provide a curated ecosystem of compatible dependencies in the way that the usual Linux distribution does.
No, this is directly relevant to the comparison, especially since the original context of this discussion is about how Windows portable apps are no bigger than their locally installed counterparts.
A typical Linux package manager provides applications and libraries. It is very common for a single package install with yum/dnf, apt, pacman, etc. to pull in dozens of dependencies, many of which are shared with other applications. Whereas, a single package install on Windows through winget almost never pulls in any other packages. This is because Windows applications are almost always distributed in self-contained format; the aforementioned MSVCRT is a notable exception, though it's typically bundled as part of the installer.
So yes, Windows has a package manager, and it's great for what it does, but it's very different from a Linux package manager in practice. The distinction doesn't really matter to end users, but it does to developers, and it has a direct effect on package sizes. I don't think this situation is going to change much even as winget matures. Linux distributions carefully manage their packages, while Microsoft doesn't (and probably shouldn't).
I never said that WinGet was a drop-in replacement for yum - but the parent's claim that Windows doesn't have a package manager isn't true.
There are plenty of packages that require you to add extra sources to your package manager, that are not maintained by the distro. Docker [0] has official instructions to install via their package source. WinGet allows third party sources, so there’s no reason you can’t use it. It natively supports dependencies too. The fact that applications are packaged in a way that doesn’t utilise this for WinGet is true - but again, I was responding to the claim that Windows doesn’t have a package manager.
> I never said that WinGet was a drop-in replacement for yum - but the parent's claim that Windows doesn't have a package manager isn't true.
Context matters. They were talking about a type of package manager.
But even without caring about context, the sentence was not "Windows does not have a package manager". The sentence ended with "Windows does not have a package manager to install them" and "them" refers to things that winget generally does not have.
Not as understood by users of every other operating system, even macOS. It's more of an "application manager". Microsoft has a history of developing something and reusing the well-understood term to mean something completely different.
Assuming you're talking about winget, that seems to operate either as an alternative CLI interface to the MS store with a separate database developers would need to add their manifests to, or to download and run normal installers in silent mode. For example, if you do winget show "adobe acrobat reader (64-bit)" you can see what it will grab. It's a far cry from how most Linux package managers operate.
Windows 2020 - welcome to Linux 1999, where the distro has a package manager that offers just about everything most users will ever need, ready to install from the web.
Er, yes they do? I guess things could be spotty if you don't have drivers (which... is true of any OS), but IME that's rare. But I have to ask because I keep hearing variations of this: What exactly is wrong with */Linux handling of multi-monitor? The worst I think I've ever had with it is having to go to the relevant settings screen, tell it how my monitors are laid out, and hit apply.
>I guess things could be spotty if you don’t have drivers
Sure, and this unfortunately isn’t uncommon.
> What exactly is wrong with */Linux handling of multi-monitor?
X11’s support for multiple monitors that have mismatched resolutions/refresh rates is… wrong. Wayland improves upon this but doesn’t support G-Sync with Nvidia cards (even in the proprietary drivers). You might say that’s not important to you and that’s fine, but it’s a deal breaker to me.
Maybe they're using a Desktop Environment that poorly expresses support for it?
I have a limited sample size, but xrandr on the command line and GUI tools in KDE Plasma and (not as recently) LXQt (it might have been LXDE) work just fine in the laptop + TV or projector case.
> I have a limited sample size, but xrandr on the command line and GUI tools in KDE Plasma and (not as recently) LXQt (it might have been LXDE) work just fine in the laptop + TV or projector case.
I'm fond of arandr; nice GUI, but also happily saves xrandr scripts once you've done the configuration.
Every OS has its quirks, things you might not recall as friction points because they're expected.
I haven't found any notable issues with quality hardware, possibly with some need to verify support in the case of radio transmitter devices. You'd probably have the same issue for, e.g., Mac OS X.
As consumers we'd have an easier time if:
1) The main chipset and 'device type ID' had to be printed on the box.
2) Model numbers had to change in a visible way for material changes to the Bill of Materials (any components with other specifications, including different primary chipset control methods).
3) Manufacturers at least tried one flavor of Linux, without non-GPL modules (common firmware blobs are OK), and gave a pass / fail on that.
I don’t think I am spreading FUD. Hardware issues with Linux off the well-trodden path are a well-known problem. X11 (still widely used on many distros) has a myriad of problems with multi-monitor setups - particularly when resolutions and refresh rates don’t match.
You’re right that the manufacturers could provide better support, but they don’t.
Legally, they're allowed to modify and use GPL code internally without redistributing the source. The only mistake was publishing the source code to a public git repo without the LICENSE file, which may be a GPL violation.
I say "may", because I'm not sure if you have internal code on a public git or FTP server, is that consider "distributing"?
Currently on mobile and going from memory, but I remember having to push out quick patches for something around 2020-ish or late 2010s? The tip of my tongue says it was a use-after-free vuln in a patch to openssl, but I can't remember with confidence. I'll see if I can find it once I get home.
Worth noting, lest I give the wrong impression: I don't think security is a reason to avoid Debian. For me the hacked-up kernels and old packages have been much more the pain points, though I mostly stopped doing that work a few years ago. As a regular user (unless you're compiling lots of software yourself) it's a non-issue.
Debian didn't "divert engineering resources" to this project. People, some of whom happen to be Debian developers, decided to work on it for their own reasons. If the Reproducible Builds effort didn't exist, it doesn't mean they would have spent more time working on other areas of Debian. Maybe even less, because the RB effort was an opportunity to find and fix other bugs.
Yes, the system is not closed and certainly people may simply not contribute to Debian at all. However, my main point is that reasonable people disagree on the relative importance of RB among other things, so it's not about "want[ing] non-reproducible builds" even if one has unlimited resources, but rather wanting RB, but not at the expense of X, where X differs from person to person.
"It's possible to disagree on whether a feature is worth doing" is technically true, but why is it worth discussing time spent by volunteers on something already done? People do all sorts of things in their free time; what's the opportunity cost there?
2FA only protects login. If you're already logged in, someone with access to the computer can just copy the session token. Or instruct the email client that is already running to dump all your emails to a local file.
I guess for computationally expensive things the only real option is to put it behind a login. I’m sure this is something SourceHut doesn’t want to do but maybe it’s their only decent option.
On SourceHut, git blame is available while logged out, but the link to blame pages is only shown to logged-in users. That may be a recent change to fight scrapers.
Precomputing git blame should take the same order of magnitude of storage as the repository itself. Same number of lines for each revision, same number of lines changed in every commit.
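As a rough sketch of why (names and types here are hypothetical): a blame result is just one commit id per line, so it scales like the text it annotates and deltas the same way:

    use std::collections::HashMap;

    type CommitId = [u8; 20]; // SHA-1-sized identifier
    type FilePath = String;

    // blame[(revision, file)] = the commit that last touched each line
    type BlameCache = HashMap<(CommitId, FilePath), Vec<CommitId>>;

    // One 20-byte id per line, comparable to the bytes of the line itself
    // in the underlying repository; only lines a commit touches change
    // between revisions, just like the file contents themselves.
    fn blame_bytes_per_revision(lines_in_file: usize) -> usize {
        lines_in_file * std::mem::size_of::<CommitId>()
    }

    fn main() {
        let cache: BlameCache = HashMap::new();
        println!("{} files cached; ~{} bytes of blame for a 1000-line file",
                 cache.len(), blame_bytes_per_revision(1_000));
    }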