
> written with a fair bit of feature flags

I see you use Cargo features for this. One thing to be aware of is Cargo's feature unification (https://doc.rust-lang.org/cargo/reference/features.html#feat...), i.e. if an application embeds crate A that depends on nova_vm with all features and crate B that depends on nova_vm without any security-sensitive features like shared-array-buffer (e.g. because it runs highly untrusted JavaScript), then interpreters spawned by crate B will still have all features enabled.

Is there another way for crate B to tell the interpreter not to enable these features for the interpreters it spawns itself?
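
A minimal sketch of that unification behaviour, with hypothetical crate names (only nova_vm and its shared-array-buffer feature come from this thread): if both manifests below end up in the same build, Cargo compiles a single nova_vm with the union of the requested features, so the interpreters crate B spawns still have shared-array-buffer compiled in.

    # crate_a/Cargo.toml -- embeds the engine with everything enabled
    [dependencies]
    nova_vm = { version = "*", features = ["shared-array-buffer"] }

    # crate_b/Cargo.toml -- runs highly untrusted JavaScript, asks for a minimal engine
    [dependencies]
    nova_vm = { version = "*", default-features = false }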


Nice catch, thanks for pointing that out! This also might be less than ideal if it's the only option (rather than in addition to a runtime startup flag or a per-entrypoint/execution flag), because one could feasibly want to bundle the engine with the app with features x, y, and z enabled, but only allow some scripts to execute with a subset of those while running other scripts with a different subset.


Thank you for pointing this out, I'll have to look into this at some point.

There is currently no other way to disable features, and at least for the foreseeable future I don't plan on adding runtime flags for these things. I'm hoping to use the feature flags for optimisations (e.g. no need to check for holes, getters, or the prototype chain in Array indexed operations if those are turned off), and I'm a bit leery of making those kinds of optimisations if the feature flags are runtime-controllable. It sounds like a possible safety hole.
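
A sketch of the sort of compile-time gating meant here; the feature name and the flat backing store are made up for illustration, not nova_vm's real internals:

    // With the feature compiled out there are no holes, getters or prototype
    // chain to consult: an indexed read is just a bounds-checked load.
    #[cfg(not(feature = "array-exotic-objects"))]
    fn array_get(elements: &[f64], index: usize) -> Option<f64> {
        elements.get(index).copied()
    }

A runtime flag would force the check back into the hot path (or require compiling both variants), which is presumably the safety concern above.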

For now I'll probably have to just warn (potential) users about this.



Thanks! We've changed the URL above from the university press release (https://ethz.ch/en/news-and-events/eth-news/news/2025/05/eth...) to that first link.


Impact illustration:

> [...] the contents of the entire memory to be read over time, explains Rüegge. “We can trigger the error repeatedly and achieve a readout speed of over 5000 bytes per second.” In the event of an attack, therefore, it is only a matter of time before the information in the entire CPU memory falls into the wrong hands.


Prepare for another dive maneuver in the benchmarks department I guess.


We need software and hardware to cooperate on this. Specifically, threads from different security contexts shouldn't get assigned to the same core. If we guarantee this, the fences/flushes/other clearing of shared state can be limited to kernel calls and process lifetime events, preserving all the benefits of caching and speculative execution for the things actually doing heavy lifting, without worrying about side-channel leaks.


I get you, but devs struggle to configure nginx to serve their overflowing cauldrons of 3rd-party npm modules of witches' incantations. Getting them to securely design and develop security-labelled, cgroup-based micro (nano?) compute services for inferencing text at various security levels is beyond even 95% of coders. I'd posit it would be a herculean effort even for the top 1% of devs.

Just fix the processors?


It's not a "just" if the fix cripples performance; it's a tradeoff. It is forced to hurt everything everywhere because the processor alone has no mechanism to determine when the mitigation is actually required and when it is not. It is 2025 and security is part of our world; we need to bake it right into how we think about processor/software interaction instead of attempting to bolt it on after the fact. We learned that lesson for internet facing software decades ago. It's about time we learned it here as well.


Is the juice worth the squeeze? Not everything needs Orange Book (DoD 5200.28-STD) Class B1 systems.


How will this prevent JavaScript from leaking my password manager database?


And if not, why did they introduce severe bugs for a tiny performance improvement?


It's not tiny. Speculative execution usually makes code run 10-50% faster, depending on how many branches there are.


Yeah… folks who think this is just some easy-to-avoid thing should go look around and find the processor without branch prediction that they want to use.

On the bright side, they will get to enjoy a much better music scene, because they’ll be visiting the 90’s.


> Does Branch Privilege Injection affect non-Intel CPUs?

> No. Our analysis has not found any issues on the evaluated AMD and ARM systems.


IBM Stretch had branch prediction. Pentium in the early 1990s had it. It's a huge win with any pipelining.


That's a vast underestimate. Putting an lfence before every branch costs on the order of a 10x slowdown.


There is of course a slight chicken-and-egg thing here: if there were no (dynamic) branch prediction, we (as in compilers) would emit different code that is faster for non-predicting CPUs (and presumably slower for predicting CPUs). That would mitigate a bit of that 10x.


A bit. I think we've shown time and time again that letting the compiler do what the CPU is doing doesn't work out, most recently with Itanium.


The issue is with indirect branches. Most branches are direct ones.


Of course I know that.

But if the fix for this bug (how many security holes have there been now in Intel CPUs? 10?) brings only a couple percent performance loss, like most of them so far, how can you even justify that at all? Isn't there a fundamental issue in there?


How much improvement would there still be if we weren't so lazy when it comes to writing software? If we were working to get as much performance out of the machines as possible and avoiding useless bloat, instead of just counting on the hardware to be "good enough" to handle the slowness with some grace?


A modern processor pipeline is dozens of cycles deep. Without branch prediction, we would need to know the next instruction at all times before beginning to fetch it. So we couldn’t begin fetching anything until the current instruction is decoded and we know it’s not a branch or jump. Even more seriously, if it is a branch, we would need to stall the pipeline and not do anything until the instruction finishes executing and we know whether it’s taken or not (possibly dozens of cycles later, or hundreds if it depends on a memory access). Stalling for so many cycles on every branch is totally incompatible with any kind of modern performance. If you want a processor that works this way, buy a microcontroller.
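
A rough back-of-envelope with illustrative numbers (not taken from this thread): say one instruction in five is a branch, an unresolved branch stalls the pipeline for about 20 cycles, and the core could otherwise retire 4 instructions per cycle.

    ideal time for 5 instructions:       5 / 4        ≈ 1.25 cycles
    add a 20-cycle stall on the branch:  1.25 + 20    = 21.25 cycles
    slowdown:                            21.25 / 1.25 ≈ 17x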


But branch prediction doesn't necessarily need complicated logic. If I remember correctly (it's been 20 years since I read any papers on it), the simple heuristic "all relative branches backwards are taken, but forward and absolute branches are not" could achieve 70-80% of the performance of the state-of-the-art implementations back then.
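
That heuristic is simple enough to fit in a couple of lines; a sketch (names and signature invented for illustration, with no history tables at all):

    // Backward relative branches (typically loop back-edges) are predicted taken;
    // forward and absolute branches are predicted not taken.
    fn predict_taken(is_relative: bool, branch_pc: u64, target_pc: u64) -> bool {
        is_relative && target_pc < branch_pc
    }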


Do you mean overall or localized to branch prediction? Assuming all of that is true, you're talking about a 20-30% performance hit?


> If you want a processor that works this way, buy a microcontroller.

The ARM Cortex-R5F and Cortex-M7, to name a few, have branch predictors as well, for what it’s worth ;)


You can still have a static branch predictor. That has surprisingly good coverage. I'm not saying this is a great idea, just pointing it out.


Thanks! It would be great if someone could update the title URL to that blog post; the press release is worse than useless.



I don't know, guys. Yes, the direct link saves a click, but the original title was more informative for the casual reader. I'm not a professional karma farmer, and in dang's shoes I would have made the same adjustment, but I can't deny that seeing the upvote rate go down by 75% after the change was a little harsh.


It was on the frontpage for 23 hours (and still is!) so the submission still did unusually well.

I thought about adding the blog post link to the top text (a bit like in this thread: https://news.ycombinator.com/item?id=43936992), but https://news.ycombinator.com/item?id=43974971 was the top comment for most of the day, and that seemed sufficient.

Edit: might as well belatedly do that!


Thanks for all your hard work as always, dang.


Thanks!


Installable versions of Windows apps still bundle most of their libraries, like portable apps do, because Windows does not have a package manager to install them.


Windows does have a package manager, and has had one for the last 5 years.


Apart from the Microsoft Visual C++ Runtime, there's not much in the way of third-party dependencies that you as a developer would want to pull in from there. Winget is great for installing lots of self-contained software that you as an end user want to keep up to date. But it doesn't really provide a curated ecosystem of compatible dependencies in the way that the usual Linux distribution does.


Ok, but that's a different argument to "Windows doesn't have a package manager".


No, this is directly relevant to the comparison, especially since the original context of this discussion is about how Windows portable apps are no bigger than their locally installed counterparts.

A typical Linux package manager provides applications and libraries. It is very common for a single package install with yum/dnf, apt, pacman, etc. to pull in dozens of dependencies, many of which are shared with other applications. Whereas, a single package install on Windows through winget almost never pulls in any other packages. This is because Windows applications are almost always distributed in self-contained format; the aforementioned MSVCRT is a notable exception, though it's typically bundled as part of the installer.

So yes, Windows has a package manager, and it's great for what it does, but it's very different from a Linux package manager in practice. The distinction doesn't really matter to end users, but it does to developers, and it has a direct effect on package sizes. I don't think this situation is going to change much even as winget matures. Linux distributions carefully manage their packages, while Microsoft doesn't (and probably shouldn't).


I never said that WinGet was a drop-in replacement for yum - but the parent's claim that Windows doesn't have a package manager isn't true.

There are plenty of packages that require you to add extra sources to your package manager that are not maintained by the distro. Docker [0] has official instructions to install via their own package source. WinGet allows third-party sources, so there's no reason you can't use it. It natively supports dependencies too. The fact that applications are packaged in a way that doesn't utilise this for WinGet is true - but again, I was responding to the claim that Windows doesn't have a package manager.

[0] https://docs.docker.com/engine/install/fedora/#install-using...


> I never said that WinGet was a drop-in replacement for yum - but the parent's claim that Windows doesn't have a package manager isn't true.

Context matters. They were talking about a type of package manager.

But even without caring about context, the sentence was not "Windows does not have a package manager". The sentence ended with "Windows does not have a package manager to install them" and "them" refers to things that winget generally does not have.


Not as understood by users of every other operating system, even macOS. It's more of an "application manager". Microsoft has a history of developing something and reusing the well-understood term to mean something completely different.


Assuming you're talking about winget, that seems to operate either as an alternative CLI interface to the MS Store, with a separate database developers would need to add their manifests to, or as a way to download and run normal installers in silent mode. For example, if you do winget show "Adobe Acrobat Reader (64-bit)" you can see what it will grab. It's a far cry from how most Linux package managers operate.


Windows 2020 - welcome to Linux 1999, where the distro has a package manager offering just about everything most users will ever need, installable from the web.


I can say the same thing about Linux - it's 2025 and multi-monitor, Bluetooth, and WiFi support still don't work.


Er, yes they do? I guess things could be spotty if you don't have drivers (which... is true of any OS), but IME that's rare. But I have to ask, because I keep hearing variations of this: what exactly is wrong with */Linux's handling of multi-monitor? The worst I've ever had to do is go to the relevant settings screen, tell it how my monitors are laid out, and hit apply.


> I guess things could be spotty if you don't have drivers

Sure, and this unfortunately isn’t uncommon.

> What exactly is wrong with */Linux handling of multi-monitor?

X11's support for multiple monitors with mismatched resolutions/refresh rates is… wrong. Wayland improves on this but doesn't support G-Sync with Nvidia cards (even in the proprietary drivers). You might say that's not important to you, and that's fine, but it's a deal breaker to me.


Maybe they're using a Desktop Environment that poorly expresses support for it?

I have a limited sample size, but xrandr on the command line and the GUI tools in KDE Plasma and (not as recently) LXQt (it might have been LXDE) work just fine in the laptop + TV or projector case.


> I have a limited sample size, but xrandr on the command line and the GUI tools in KDE Plasma and (not as recently) LXQt (it might have been LXDE) work just fine in the laptop + TV or projector case.

I'm fond of arandr; nice GUI, but also happily saves xrandr scripts once you've done the configuration.


The only things you can say in the context of the few pieces of bleeding-edge hardware that aren't supported by Linux are that:

1. The hardware vendors are still not providing support the way they do for Windows.

2. The Linux devs haven't managed to adapt to this new hardware.


FUD (Fear, Uncertainty, Doubt).

Every OS has its quirks, things you might not recall as friction points because they're expected.

I haven't found any notable issues with quality hardware, though there may be some need to verify support in the case of radio-transmitter devices. You'd probably have the same issue with, e.g., Mac OS X.

As consumers we'd have an easier time if: 1) the main chipset and 'device type ID' had to be printed on the box; 2) model numbers had to change in a visible way for material changes to the bill of materials (any components with other specifications, including different primary chipset control methods); and 3) manufacturers at least tried one flavor of Linux, without non-GPL modules (common firmware blobs are OK), and gave a pass/fail on that.


I don't think I am spreading FUD. Hardware issues with Linux off the well-trodden paths are a well-known problem. X11 (still widely used on many distros) has a myriad of problems with multi-monitor setups - particularly when resolutions and refresh rates don't match.

You're right that the manufacturers could provide better support, but they don't.


Unfortunately, a lot of Windows devs are targeting 10-year-old versions.


Source?


As it wasn't widely implemented, and few people turned it on, Safari removed it in 12.1 as a potential fingerprinting variable: https://developer.apple.com/documentation/safari-release-not...

I think I remember a larger article about this, but I can't find it now.



Legally, they're allowed to modify and use GPL code internally without redistributing the source. The only mistake was publishing the source code to a public git repo without the LICENSE file, which may be a GPL violation.

I say "may" because I'm not sure: if you have internal code on a public git or FTP server, does that count as "distributing"?


> publishing the source code to a public git repo without the LICENSE file, which may be a GPL violation.

Great. You can get a federal judge to sign off on that.

Maybe they can be ordered to facilitate some kind of resolution.

I'm sure they are trembling as I write.


The toolchain (e.g. the compiler) reads the time from an environment variable (SOURCE_DATE_EPOCH) if it is present, instead of the actual time: https://reproducible-builds.org/docs/source-date-epoch/
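
Roughly, a tool that wants reproducible output does something like the following (a sketch of the convention documented at that link, not any particular compiler's code):

    use std::time::{SystemTime, UNIX_EPOCH};

    // Use SOURCE_DATE_EPOCH (seconds since the Unix epoch) when it is set,
    // otherwise fall back to the real clock.
    fn build_timestamp() -> u64 {
        match std::env::var("SOURCE_DATE_EPOCH") {
            Ok(value) => value.parse().expect("SOURCE_DATE_EPOCH must be an integer"),
            Err(_) => SystemTime::now()
                .duration_since(UNIX_EPOCH)
                .expect("system clock is set before 1970")
                .as_secs(),
        }
    }

    fn main() {
        println!("embedding build timestamp {}", build_timestamp());
    }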


> There have been security vulnerabilities for example that only existed in the Debian-based package of software.

Any examples more recent than CVE-2008-0166?


Currently on mobile and going from memory, but I remember having to push out quick patches for something around 2020-ish or the late 2010s. The tip of my tongue says it was a use-after-free vuln in a patch to OpenSSL, but I can't remember with confidence. I'll see if I can find it once I get home.

Worth noting, lest I give the wrong impression: I don't think security is a reason to avoid Debian. For me the hacked-up kernels and old packages have been much more of a pain point, though I mostly stopped doing that work a few years ago. As a regular user (unless you're compiling lots of software yourself) it's a non-issue.


In general, most responsibly reported CVEs allow several weeks for the patched fixes to propagate into the ecosystem before public disclosure.

Once an OS is no longer actively supported, it will begin to accumulate known problems if the attack surface is large.

Thus, a legacy complex monolith or desktop host often rots quicker than a bag of avocados. =3


Debian didn't "divert engineering resources" to this project. People, some of whom happen to be Debian developers, decided to work on it for their own reasons. If the Reproducible Builds effort didn't exist, that doesn't mean they would have spent more time working on other areas of Debian. Maybe even less, because the RB effort was an opportunity to find and fix other bugs.


Yes, the system is not closed, and certainly people may simply not contribute to Debian at all. However, my main point is that reasonable people disagree on the relative importance of RB among other things, so it's not about "want[ing] non-reproducible builds" even if one has unlimited resources, but rather about wanting RB, just not at the expense of X, where X differs from person to person.


"It's possible to disagree on whether a feature is worth doing" is technically true, but why is it worth discussing time spent by volunteers on something already done? People do all sorts of things in their free time; what's the opportunity cost there?


2FA only protects login. If you're already logged in, someone with access to the computer can just copy the session token. Or instruct the email client that is already running to dump all your emails to a local file.


git blame is always expensive to compute, and precomputing (or caching) it for every revision of every file is going to consume a lot of storage.


I guess for computationally expensive things the only real option is to put them behind a login. I'm sure this is something SourceHut doesn't want to do, but maybe it's their only decent option.


On SourceHut, git blame is available while logged out, but the link to blame pages is only shown to logged-in users. That may be a recent change to fight scrapers.


Precomputing git blame should take the same order of magnitude of storage as the repository itself. Same number of lines for each revision, same number of lines changed in every commit.


It should be easy to write a script that takes a branch and constructs a mirror branch with the git blame output, then compare the storage space used.
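
A rough version of that experiment for a single revision (assumes git is on PATH and you run it from the repository root; covering every revision of every file, as mentioned earlier in the thread, is where the storage really piles up):

    use std::process::Command;

    // Write `git blame` output for every tracked file into blame-mirror/,
    // then compare e.g. `du -sh blame-mirror .git` by hand.
    fn main() -> std::io::Result<()> {
        let files = Command::new("git").arg("ls-files").output()?;
        for path in String::from_utf8_lossy(&files.stdout).lines() {
            let blame = Command::new("git").args(["blame", "--", path]).output()?;
            if !blame.status.success() {
                continue; // skip anything git can't blame (e.g. submodules)
            }
            let out = std::path::Path::new("blame-mirror").join(path);
            if let Some(parent) = out.parent() {
                std::fs::create_dir_all(parent)?;
            }
            std::fs::write(&out, &blame.stdout)?;
        }
        Ok(())
    }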


It is more fun to fight LLMs than to try to create magical unicorn caches that work for every endpoint.

