- NaCl (as I understand it) essentially enshrined a particular version of LLVM bitcode as its data format. This made it overcomplicated, and made it very difficult (technically and politically) for other browsers to implement any of it. It was difficult to compile to, and it only supported Chrome (which was at the time much less dominant than it is now). But WebAssembly is like NaCl 3.0 anyway (2.0 was asm.js). WebAssembly is NaCl's successor, not its rival.
- Flash was always a firehose of security vulnerabilities, and using it tied the future of the web to an (essentially) proprietary format put out by a single company. Flash never worked well on mobile - when it worked at all, it turned your phone into a pocket heater. It didn't help that so many banner ads were distributed via Flash, and Flash integrated so badly with the browser's scheduler that even a background tab could turn your computer into a space heater. Ultimately Flash was killed by Adobe's failure to ship a good enough product.
- Java in the browser had nearly as many security issues as Flash did. And it took seconds for the JVM to start up - when it worked at all, that is. Java applets relied on the user having a reasonably up-to-date version of the JVM installed (and the appropriate browser extensions). Even as a developer it was a pain to get Java applets working. And for what benefit? So we could have ugly, non-native controls? It could have worked if browser vendors had all shipped a properly sandboxed, lightweight JVM, like they started to do with Flash. But nobody used applets anyway because they barely worked on anyone's computers.
WebAssembly has taken all of these lessons to heart and achieved two really important things that no competing system has:
1. WebAssembly only runs in a completely sandboxed, bounds-checked memory container with a tiny surface area. As a result it is orders of magnitude more secure than Flash or Java.
2. WebAssembly got buy-in from browser vendors. Because the browsers are in charge of their respective wasm virtual machines, they're extremely well integrated with the rest of the browser. Wasm has solid JS APIs, it works everywhere including mobile, it starts up instantly (unlike Java), and it's tied to the browsers' update mechanisms.
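To make the sandbox and JS-API points concrete, here's a minimal sketch that runs in any modern browser or Node. The eight bytes below are the smallest valid wasm module (just the "\0asm" magic number plus version 1), and instantiating it with an empty import object means the module starts with access to nothing at all:

```javascript
// The smallest valid wasm module: the magic number "\0asm" plus version 1.
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

console.log(WebAssembly.validate(bytes)); // true - the engine accepts it

// An empty import object means the module receives no host functions:
// no ambient access to the DOM, the network, the filesystem, anything.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), {});
console.log(Object.keys(instance.exports).length); // 0 - it exports nothing either
```

Everything the module can ever touch has to be passed in explicitly through that second argument, which is exactly what makes the surface area auditable.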
Yeah, politics were involved. But the outcome is much better than we would have achieved with flash or java. And I'm very grateful for the outcome. I don't want to need proprietary junk from Adobe and Oracle to make the web work.
> 1. Webassembly only runs in a completely sandboxed, bounds checked memory container with a tiny surface area.
Which is mostly sort of great in a browser context. But WASI appears to be pushing WASM for use outside the browser, and for this to really make sense, the restriction in your #1 would need to be severely relaxed, and/or substantial modules added to facilitate interaction with the underlying host system.
Of course some of this has happened at the browser level already: Web Audio and WebUSB being the two main examples. But as that keeps happening, the "tiny surface area" feature also gets a little harder to claim.
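As a rough illustration of what "modules added to facilitate interaction with the host" looks like, here's a toy stand-in for a WASI-style import table in plain JavaScript. The function names and errno values mirror real WASI preview1 conventions, but the bodies are made-up stubs; the point is that the host constructs the entire syscall surface, and anything it leaves out simply doesn't exist for the module:

```javascript
// Toy stand-in for a WASI import table. Real WASI errno values are used
// (0 = success, 8 = EBADF, 76 = ENOTCAPABLE); the bodies are stubs.
const preopened = new Set([3]); // the host preopened exactly one directory fd

const wasiImports = {
  // Reads only succeed on file descriptors the host chose to preopen.
  fd_read: (fd) => (preopened.has(fd) ? 0 : 8 /* EBADF */),
  // No network capability was granted, so socket calls always fail.
  sock_send: () => 76 /* ENOTCAPABLE */,
};

// An object like this is what would be handed to WebAssembly.instantiate
// as { wasi_snapshot_preview1: wasiImports } - it *is* the surface area.
console.log(wasiImports.fd_read(3));  // 0 - inside the sandbox
console.log(wasiImports.fd_read(4));  // 8 - an fd the host never granted
console.log(wasiImports.sock_send()); // 76 - capability not granted
```

Every function added to that table grows the attack surface, which is exactly the tension the comment above describes.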
I know this may be controversial, but I think injecting a carefully thought-out sandboxing layer between application code and the rest of my system is something desktop operating systems have been needing for a long time anyway.
It's crazy how vulnerable modern operating systems are in the face of malicious code. Literally any of the hundreds of programs running on my computer could steal credentials out of my browser, or read any of my files and send them out over the internet. Or, worse - encrypt all my stuff and ransom it back to me.
Why can my text editor read my browser's complete history and credential store? Why does the crappy program from Corsair that maps the buttons on my mouse have access to the internet? Why does my operating system allow Wacom's driver bundle to silently read, send, and ultimately sell lists of what programs I'm running on my computer?
So yeah, I agree that the need for WASI is a bit unfortunate. It'll be a lot of work to get everything working with WASI, and WASI programs will eventually, necessarily pierce WASM's beautiful hermetic shell. But I'm a big supporter of whatever roads lead us to having more control over what access my applications have to the rest of the system.
In the long run I can see WASI being part of a much more secure desktop computing environment. We need that - the current situation is ridiculous.
Your text editor isn't useful unless it can open files. You're saying there are some files it should not be able to open. Which files? Who decides? When is the decision made? Can a user override the decision? Does the not-openable-by-text-editor property propagate to copies? Etc. etc.
The main way to solve these problems at present is to remove the file abstraction entirely, so that you can't actually have "a text editor that opens files" at all. All you (or your apps) have is a persistent store in which a totally semantic approach is taken to defining default access patterns (so, for example, your "contacts" have an entirely different status to "that thing you created in the little notes app last week").
A more or less isomorphic situation applies to internet access, with the major difference that in general this isn't solved on any platform yet (including browsers).
In general, the capabilities in a modern general purpose operating system represent a management problem that most regular computer users don't want to have to address, most technical computer users don't want to exist (most of the time), and operating system implementors have so far been unable to solve in a broadly acceptable way.
Whether this is justification enough for "hiding" all of this under a WASI-like abstraction that is capable of saying "there are no files" and "there is no internet" at just the right moments ... I don't know.
Yeah; that’s a hard UX problem. Obviously my code editor needs access to my source code files. I as a user should decide which files I’m editing and when. But we have plenty of good precedents for this sort of stuff with mobile apps and browser extensions. “This app wants permission to access X because (developer provided reason). Allow?”. As a user I should be able to authorise whatever I want.
The goal is that when some random package inevitably goes rogue, the result shouldn’t be a full and complete compromise of my system. My partner’s work laptop was recently sitting at 100% CPU for no apparent reason. We traced it back to a dodgy Chrome extension which was probably mining cryptocurrency. Because the malicious extension didn’t have filesystem permissions, it couldn’t do much more harm than that.
I want WASI to help us do the same thing with desktop apps. To me it fits in the same bucket as OpenBSD’s pledge (and the Linux equivalents). WASI could ideally allow fine-grained control at a per-library level. E.g., I pull a video encoding library into my project, but when I load the library I specify which system APIs the module has access to. In this case, it has filesystem access but no internet access. So even if there are memory bugs in the library, or malicious code, it can’t connect to command & control servers or anything like that. And this is exactly what Firefox is starting to do with some of the libraries they use.
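A hypothetical sketch of that per-library idea in plain JavaScript (all names below are made up for illustration): the untrusted library receives only the capabilities the host explicitly hands it, so a missing grant is a hard error rather than an ambient default:

```javascript
// Capability-style loading: an untrusted module gets exactly the host
// functions we grant it, and nothing else. (All names are hypothetical.)
function loadUntrusted(moduleFactory, capabilities) {
  return moduleFactory(Object.freeze({ ...capabilities }));
}

// A "video encoder" library that wants file access - and maybe more.
const encoderFactory = (caps) => ({
  encode(path) {
    if (!caps.readFile) throw new Error("missing readFile capability");
    return `encoded(${caps.readFile(path)})`;
  },
  phoneHome(url) {
    if (!caps.fetch) throw new Error("no network capability");
    return caps.fetch(url);
  },
});

// Grant filesystem access but deliberately withhold the network:
const encoder = loadUntrusted(encoderFactory, {
  readFile: (path) => `bytes-of-${path}`,
});

console.log(encoder.encode("movie.raw")); // works: capability was granted
try {
  encoder.phoneHome("http://command-and-control.example");
} catch (err) {
  console.log(err.message); // "no network capability" - even malicious
}                           // code inside the library can't reach out
```

Wasm makes this pattern enforceable rather than cooperative: the module physically cannot name a host function that wasn't passed into its import object.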
> Obviously my code editor needs access to my source code files. I as a user should decide which files I’m editing and when.
When you use emacs to open a browser-related file, are you or are you not going to be asked "Can Emacs open ~/.mozilla/.... ?" Do you want to be asked this every time? File-by-file persistent answers? How do you turn off a previous "yes" ?
> But we have plenty of good precedents for this sort of stuff with mobile apps and browser extensions.
But mobile apps/platforms can only do this because they got rid of the file abstraction. Nothing will ever ask you on iOS or Android "can XXX access the file you created last week".
iOS and Android redefined "accessible resources" using semantics more than anything else. I don't think you can pull this off on systems with a file abstraction, or per-process internet filtering, because there are too many different kinds of things on the system (largely, but not exclusively because of the variety exposed by the file abstraction).
For what it’s worth, I hate the fact that mobile phones and web browsers throw away the interoperability of the filesystem.
But we have a real problem with insecure and untrustworthy code, be it npm modules, desktop apps, or anything else. If we want the desktop to stay healthy as a computing platform, we as an industry need to find ways to solve this. I don’t want safe computing to be the sole domain of locked-down phones with no filesystem.
I’ve mentioned two potential solutions: 1) we can ask the user to explicitly decide what access programs should have, and 2) software sandboxing of libraries via WASI, OpenBSD’s pledge, or Deno’s security model.
It sounds like you don’t like either of these solutions, but I’m all out of ideas here. How do you think we should approach this problem? I’d love to hear your thoughts.
> When you use emacs to open a browser-related file, are you or are you not going to be asked "Can Emacs open ~/.mozilla/.... ?" Do you want to be asked this every time? File-by-file persistent answers? How do you turn off a previous "yes" ?
A good example of where WinUI/UWP and macOS entitlements are going.
Here is a fun exercise: compile a full C application with a dependency on a Heartbleed-tainted version of OpenSSL into WebAssembly, expose it to the Internet, and watch it burn exactly the same way, despite whatever magic properties the sandbox offers.
Notice the full application part.
The castle walls don't matter if one can make the people inside start a fire on their own.
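That failure mode is easy to sketch (the memory layout below is invented for illustration): wasm only traps accesses past the edge of linear memory, so a Heartbleed-style over-read of a buffer *inside* that memory is perfectly legal and leaks whatever sits next to it:

```javascript
// One 64KiB page of wasm linear memory, viewed from JS for illustration.
const memory = new WebAssembly.Memory({ initial: 1 });
const heap = new Uint8Array(memory.buffer);

// Invented layout: a 16-byte request buffer, then a "private key".
const REQ = 0, KEY = 16;
const secret = "SECRET";
for (let i = 0; i < secret.length; i++) heap[KEY + i] = secret.charCodeAt(i);

// The Heartbleed bug: echo back `len` bytes without validating `len`.
// Every byte read here is inside linear memory, so nothing ever traps.
const echo = (len) => Array.from(heap.slice(REQ, REQ + len));

const leaked = echo(32); // attacker asks for more than the 16-byte request
console.log(String.fromCharCode(...leaked.slice(16, 22))); // "SECRET"
```

The sandbox keeps the bug from touching the host, but everything the application itself holds in memory is fair game.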
Webassembly and sandboxing in general isn’t a silver bullet that will solve every security problem we face. We still need our other languages and other tools, like static analysis and model checking.
I see it as a part of defence in depth. Wasm helps prevent RCE attacks and helps control the blast radius of bad code. Rust and Go help prevent memory corruption bugs like Heartbleed from happening in the first place. No individual piece will be perfect - but ideally we want a world where no single bug on its own gives attackers the keys to the kingdom. At least, that’s the hope.
Fully agree, and that is what irks me with plenty of WebAssembly-related content: overselling the sandbox model, a lack of retrospective on the history of computing, and very little content on how pentesting actually goes about defeating sandboxes.
So we get all these startups hyped on how WebAssembly is the silver bullet that will solve all security problems, which is kind of true until one reads all the footnotes that come along with the contract.
Ok, I hear that, but it sounds like you're enthusiastically reacting to a position I don't hold.
Who, exactly, is overselling the wasm sandbox model? What startups are claiming wasm is a silver bullet that will solve all their problems? I believe they're out there - but just because some wasm proponents are idiots doesn't make WebAssembly itself without value or merit.
There are young, idealistic "true believers" who latch on to any new technology trend. I've never been smart or lucky enough to stop them. People seem to build bad cults on top of all sorts of dubious technology ideas. And I've met plenty - nodejs devs who wanted to rewrite Linux in JavaScript. React devs who thought functional programming was invented by React and were convinced it was going to change the world. And oh, the blockchain people. So many blockchain people.
But godspeed - maybe you'll have more luck in your anti-hucksterism crusade than I.
See, you cleverly left out PNaCl, and used the bounds-checking argument for WebAssembly, which doesn't apply at all to data structures stored in the same linear memory segment.
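To spell out the linear-memory point (the offsets below are invented for illustration): the sandbox bounds-checks the edges of linear memory, not the boundaries between a module's own data structures, so an overflow in one buffer silently corrupts its neighbour without any trap:

```javascript
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64KiB page
const heap = new Uint8Array(memory.buffer);

// Invented layout: two adjacent 16-byte buffers inside linear memory.
const BUF_A = 0, BUF_B = 16;
heap.fill(7, BUF_B, BUF_B + 16); // buffer B holds "important" data

// An off-by-N write through buffer A runs straight into buffer B.
// Every store lands inside linear memory, so the sandbox never traps.
for (let i = 0; i < 32; i++) heap[BUF_A + i] = 0;

console.log(heap[BUF_B]); // 0 - B was silently corrupted, no error anywhere
```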
As for the rest, I won't even bother dismantling it yet again.
https://en.m.wikipedia.org/wiki/Google_Native_Client
https://adobe-flash.github.io/crossbridge/
https://www.usenix.org/conference/usenixsecurity20/presentat...