
Android folks are thinking about using WASM for the NDK, with compilation to native code on the Play Store.

https://github.com/android/ndk/issues/1771



Apple already went down that road with bitcode and they abandoned it, or so I thought.

The problem is that you can't solve the problem of developers not adopting new hardware features by abstracting the hardware down to a lowest common denominator, which is what WASM does and always will do (because the web guys have no interest in letting people write ARM-only or Intel-only web pages, let alone NEON-only web pages).

You can see this problem in the writeup, where the posited use case is SIMD, but that's only one of many features CPU vendors could add. What about all the others? Now instead of waiting for Android devs to adopt new features, you have to wait for WASM to get them, then for users to update their OS, and then for Android devs to adopt them as well. That doesn't sound faster.

So I'd guess that a better investment would be in better developer toolchains and emulators.

There are other problems with that approach:

1. How can devs measure performance if it's the Play Store that compiles the app? Upload it, download it and measure that? You never know what you're gonna get, and the app may even be recompiled behind your back without you doing a release.

2. An example of a painful transition was 32 -> 64 bit, said to be hard because of the need for doubled-up testing. Well, WASM doesn't fix that, and it's not obvious how it could. It has 32- and 64-bit variants too, and the need to test both versions is driven by the way C/C++ semantics change with pointer width, not by the way the native code is expressed.

3. Even just abstracting SIMD isn't all that easy. The Java guys have spent years designing a SIMD abstraction (the Vector API) that papers over the differences between AVX and NEON. C++ has no such standard abstraction; you literally code against intrinsics for specific instructions. So it would only work if you assume a really good auto-vectorizing compiler, but the JVM guys also spent years trying that and eventually gave up. The JVM can auto-vectorize some things, but automatically exploiting the full power of SIMD units is too hard.


> Apple already went down that road with bitcode and they abandoned it, or so I thought.

Indeed, however it was mostly caused by relying on their own LLVM bitcode fork to achieve the stability guarantees that upstream LLVM bitcode doesn't provide, and eventually getting fed up with wasting development resources keeping it in sync with upstream.

As for WASM/NDK as a replacement for JNI, I don't think it is a good idea, mostly due to how bad the overall NDK experience already is, and this won't make it any better.

Regarding 3, .NET does much better in this regard, with System.Numerics and processor-specific intrinsics.

Usually people tend to forget that the CLR was designed for C++ workloads as well, and that is reflected in its bytecode.

As for WASM in general, I think its place is in the browser; anything else is just yet another take on the bytecode deployments we've had since the dawn of computing.

Maybe someone should write a P-Code to WASM compiler, to bring UCSD Pascal back into modern times, and make Pascal cool again.



