> True, but I suspect it'll be a lot easier to virtualise all those APIs through WASM than it is for a regular native binary. I mean, half the point of docker is that all syscalls are routed into an LXD container with its own filesystem and network. It should be pretty easy to do the same thing in userland with a wasm runtime.
All of this sounds too good to be true. The JVM tried to use one abstraction to cover different processor ISAs, different operating systems, and a security boundary. The security boundary failed completely. As far as I understand, WASM is choosing a different approach here, which is good. The abstraction over operating systems was a partial failure: it worked well enough for many kinds of server applications, but it was never good enough for desktop applications and system software. The abstraction over the CPU was and is a big success, I'd say.
What exactly makes you think it will be easier, with WASM as the CPU abstraction, to do all the rest again? Especially when thinking about use cases as diverse as in-browser apps and long-running servers.
A big downside of all these super-powerful abstraction layers is how they react to upstream changes. What happens when Linux introduces a next-generation network API that has no counterpart in Windows or in the browser? What happens when the next language runtime wants to implement a low-latency GC? Azul first designed a custom CPU, and later changed the Linux memory-management API, to make that possible for their JVM.
All in all, the track record of attempts to build the one true solution for all our problems is quite bad. Some of these attempts found niches in which they are a very good fit, like the JVM; others are curiosities of history.
Nothing is intuitive on its own. Intuitiveness is a property of the relation between a thing and some subject. Whether `map` is more intuitive than `then` depends on that subject. Without assuming a target audience, it is futile to design a library to be intuitive. Intuitive is that for which we have already built an intuition.
One purpose of science is to provide the rest of society, who are not scientists, with reliable insights, ideally with actionable advice on how to solve a problem. If you treat science purely as a strong-link problem, the burden of quality control lies with the consumer of science. Peer review attempts to place it with experts. That approach is nowhere near perfect, but it is the best we have. And it scales much better.
Peer review is more like table stakes for the expert conversation. A non-expert is still not particularly well equipped to evaluate peer-reviewed papers and synthesize a conclusion from them; you will still need an expert to boil it down to a lay-interpretable conclusion.
Steve Bannon explains this very well. It's called "flooding the zone." Because people can only focus on one thing at a time, fascists do five things at a time. The media focuses on one thing while they make progress on the other four. The culture-war nonsense has proven to be delicious bait for the media, and now so has DEI.
Using culture wars to attack foreign aid is a decades-old strategy of the GOP.
Entirely eliminating foreign aid would not meaningfully reduce the scope & power of the federal government in the U.S. (although obviously it harms our soft power abroad and there are millions of people who will be directly and indirectly impacted).
You don't have to wonder. Just observe what is going on in the so-called liberal democracies of the world while identity politics is implemented by governments: censorship, redefinition of words, the invention of new moral issues to shut down inconvenient facts, and such.
As someone who has been using Emacs for almost 30 years: there are lots of USPs for Emacs, but none of them should matter for the selection of a programming language. I hear lots of good things about Calva, a Clojure plugin for VS Code. Look at that if you want to try out Clojure in the form of Jank.
My observation with the Scala and Haskell ecosystems has been very similar. One primary source of churn is changing interfaces. Very advanced static type systems offer a large design space for how to model interfaces, which leads to lots of incremental improvements. But lots of interface changes combined with deep dependency trees lead to an exponential explosion of changes.
Rich Hickey discussed this very early on, but almost nobody got it when he said it, myself included. Striving for obviously simple interfaces that avoid nominal static types like the plague is the way to avoid these often pointless waves of changes through a language ecosystem, just because someone designed a better option monad or whatever.