As things go along, I’m more and more mystified by WASM’s apparent design. It feels like 1996 Java applets, but without the built-in GC, stdlib, or even the most basic hooks into the browser. Which makes it basically useless for what I assume its goal is: letting you use languages other than JavaScript to code a web page. Without trivial DOM access, what’s the point?
Other proposed usages, like FaaS-style edge computing, honestly don’t make a lot of sense to me. Why target an artificial VM for that purpose instead of existing architectures that work just fine with less overhead?
For compute that could happen either in the browser or at the edge, maybe, but how realistic or worthwhile is that level of portability really? I’m guessing it might be in a few narrow cases, but it’s not going to be a common pattern.
In a nutshell, Wasm is essentially a CPU with a tiny instruction set. It's very primitive and minimal, but I think that was the point. If you need to do something with numbers in a web app, it's pretty neat. If you need to work with strings, you're going to end up crying in the fetal position under your desk.
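To make that concrete, here's a rough sketch of what the JS side of that looks like today. The module name and its add/greeting exports are made up for illustration:

    // Hypothetical module exporting add(i32, i32) -> i32 and
    // greeting() -> i32 (a pointer to a NUL-terminated string in linear memory).
    const { instance } = await WebAssembly.instantiateStreaming(fetch("demo.wasm"));
    const exports = instance.exports as {
      memory: WebAssembly.Memory;
      add: (a: number, b: number) => number;
      greeting: () => number;
    };

    // Numbers: trivial. Wasm values map straight onto JS numbers.
    console.log(exports.add(2, 3)); // 5

    // Strings: you get back an integer offset, then have to dig the bytes out
    // of linear memory and decode them yourself.
    const ptr = exports.greeting();
    const view = new Uint8Array(exports.memory.buffer, ptr);
    const len = view.indexOf(0); // find the NUL terminator
    console.log(new TextDecoder().decode(view.subarray(0, len)));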
The current iteration of WebAssembly was always an "MVP." It's got the core instruction set and memory model to run, essentially, C programs safely and efficiently, and just enough interop with the host to get data in and out.
But it was always the plan to expand on that, and make a wider set of use cases easier and more efficient. Working directly with the DOM really requires some amount of integration with the GC that manages the DOM, for example.
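Today the usual workaround is to route every DOM touch through imported JavaScript shims. A minimal sketch, with a made-up setText import and element id:

    // The module can't hold a reference to a DOM node; all it sees are integers.
    // So the host hands in shims that do the DOM work on the module's behalf.
    let memory!: WebAssembly.Memory; // assigned right after instantiation

    const imports = {
      env: {
        // hypothetical import: the module passes a pointer + length into its
        // linear memory, and the shim turns that into an actual DOM update
        setText: (ptr: number, len: number) => {
          const bytes = new Uint8Array(memory.buffer, ptr, len);
          document.getElementById("output")!.textContent =
            new TextDecoder().decode(bytes);
        },
      },
    };

    const wasmBytes = await (await fetch("widget.wasm")).arrayBuffer();
    const { instance } = await WebAssembly.instantiate(wasmBytes, imports);
    memory = instance.exports.memory as WebAssembly.Memory;

Every value crossing that boundary gets copied and re-encoded by hand, which is part of what the GC and host-integration work is meant to smooth over.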
The thing that makes this interesting outside the browser is the security model. Unlike typical environments used for FaaS or whatever else, it's capability-based and starts from literally zero: everything a WebAssembly module can touch has to be passed in explicitly when it's instantiated. That's a lot narrower, lighter weight, and more flexible than things like containers.
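A rough sketch of what "starts from literally zero" means in practice (the module and its imports here are made up):

    // Nothing is ambient. A module that declares no imports can do nothing but
    // compute over its own linear memory.
    const bytes = await (await fetch("untrusted.wasm")).arrayBuffer();

    // Every capability the guest gets is something the host explicitly handed
    // over at instantiation, and the host can wrap, stub, or rate-limit each one.
    const { instance } = await WebAssembly.instantiate(bytes, {
      env: {
        log: (code: number) => console.log("guest says:", code), // made-up import
        now: () => Date.now(),                                   // made-up import
      },
    });

    // No import for the filesystem, network, or anything else? Then the guest
    // simply has no way to reach it.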
But I thought WASM was to replace the .NET and Java VMs. But without a VM, to be more of a pass-through to the underlying CPU. So much faster. So compile down to a 'byte code'-like thing that does not run on a VM, but runs on the underlying HW.
> So compile down to a 'byte code'-like thing that does not run on a VM, but runs on the underlying HW.
I think you may be confusing "system" virtual machines and "process" virtual machines. The VM here is the thing executing the WASM bytecode; it works the same way as the .NET and Java VMs.
I understand it is a VM like Java. I was just under the impression that it was somehow better, more streamlined, that it would offer enough of a performance improvement that you could start treating it like running a 'native' local app. Like if I build a 'native' app, a 'thick client', I could now run it on WASM in a browser, and thus not need any local installs but have the same performance.
I've seen some apps doing that. But I guess it isn't considered 'the way' for the future?
> But I thought WASM was to replace the .NET and Java VMs.
Its name is literally web assembly. Its goal was never, and still isn't, to replace those VMs. It was literally an idea to create a faster code sandbox for the web, based on the ideas from Mozilla's asm.js.
> to be more of a pass-through to the underlying CPU. So much faster. So compile down to a 'byte code'-like thing that does not run on a VM, but runs on the underlying HW.
wat.
> So goal was speed. And to allow other languages to compile to it.
Yes, it was. That has nothing to do with the fantasy of replacing the JVM and .NET VM, or running directly on hardware.
Of course, that is where it started.
I'm probably being loose with terminology.
I assumed that to gain this speed, it was a little closer to the metal than a VM like Java; that it must have some kind of pass-through to allow commands to run on the local HW, not just be emulated in a VM. So like a VM in that you can compile to it, but it would execute natively.
From the FAQ:
"WebAssembly aims to execute at native speed by taking advantage of common hardware capabilities available on a wide range of platforms. It is a low-level assembly-like language with a compact binary format that runs with near-native performance"
And when looking at the Use Cases, it seems to be trying to do a lot more than JavaScript.