
"only around 14%" kills me, I remember people buying new CPUs every year for definitely less improvement than that because of how critical it is


From that link, it also sounds like 14% is the best case, which requires some help from the underlying OS:

"Two results are shown for WasmBoxC, representing two implementations of memory sandboxing. The first is explicit sandboxing, in which each memory load and store is explicitly verified to be within the sandboxed memory using an explicit check (that is, an if statement is done before each memory access). This has 42% overhead.

The OS-based implementation uses the “signal handler trick” that wasm VMs use. This technique reserves lots of memory around the valid range and relies on CPU hardware to give us a signal if an access is out of bounds (for more background see section 3.1.4 in Tan, 2017). That is fully safe and has the benefit of avoiding explicit bounds checks. It has just 14% overhead! However, it cannot be used everywhere (it needs signals and CPU memory protection, and only works on 64-bit systems).

There are more options in between those 14% and 42% figures. Explicit and OS-based sandboxing preserve wasm semantics perfectly, that is, a trap will happen exactly when a wasm VM would have trapped. If we are willing to relax that (but we may not want to call it wasm if we do) then we can use masking sandboxing instead (see section 3.1.3 in Tan, 2017), which is 100% portable like explicit sandboxing and also prevents any accesses outside of the sandbox, and is somewhat faster at 29% overhead. Other sandboxing improvements are possible too - almost no effort has gone into this yet."
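
To make those options concrete, here's a rough sketch of what the per-access code could look like in the generated C. This is my own illustration, not WasmBoxC's actual output; the names (sandbox_mem, load32_*) and the 16 MiB size are invented for the example:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical sandbox layout, for illustration only. MEM_SIZE is a
       power of two so the masking variant is a single AND. A few extra
       bytes are allocated so a masked 4-byte load near the top of the
       sandbox stays in bounds. */
    #define MEM_SIZE (1u << 24)            /* 16 MiB of linear memory */
    static uint8_t *sandbox_mem;           /* base of the sandboxed region */

    /* Explicit sandboxing: an if-check before every access (~42% overhead),
       trapping exactly where a wasm VM would. */
    static uint32_t load32_explicit(uint32_t addr) {
        if (addr > MEM_SIZE - 4)
            abort();                       /* trap on out-of-bounds access */
        uint32_t v;
        memcpy(&v, sandbox_mem + addr, 4);
        return v;
    }

    /* OS-based sandboxing would instead emit a plain unchecked load and
       reserve guard pages around sandbox_mem, relying on the SIGSEGV
       handler to catch out-of-bounds accesses (~14%, 64-bit only). */

    /* Masking sandboxing: force the address into range instead of checking
       it (~29% overhead). A bad address silently wraps inside the sandbox,
       so wasm trap semantics are not preserved. */
    static uint32_t load32_masked(uint32_t addr) {
        uint32_t v;
        memcpy(&v, sandbox_mem + (addr & (MEM_SIZE - 1)), 4);
        return v;
    }

    int main(void) {
        sandbox_mem = calloc(MEM_SIZE + 4, 1);  /* +4: slack for masked loads */
        if (!sandbox_mem) return 1;
        sandbox_mem[0] = 42;
        printf("%u %u\n", load32_explicit(0), load32_masked(MEM_SIZE));
        free(sandbox_mem);
        return 0;
    }

The masking version needs no OS support at all, which is why it's the fully portable option; the trade-off is that an out-of-range access wraps around inside the sandbox instead of trapping.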

It sounds like this last one is the most relevant to porting code to obscure platforms (which usually means embedded these days). 29% overhead for verifiably safe sandboxing is a good trade-off, but when you don't actually need that sandboxing, it's not an insignificant cost to pay, especially on hardware that's slow by modern standards to begin with.


Most of the obscure architectures people talk about are rather slow—after all, if performance were desired, then there would be a modern compiler available for them—and so code running on them isn't performance critical.


That conclusion does not follow. Often it's the slower architectures that need the high-performance code, while modern computers can afford to waste some. 8051s are ridiculously common because they're stupidly cheap, but they're so under-powered that software performance is absolutely critical.



