Dive into the JAMstack and file-based CMS world. Netlify is kinda the big name here: https://www.netlify.com/. Push markdown content to git -> Netlify builds using your file-based CMS of choice -> HTML published as a blog, etc.
Beware the lure of MDX and its equivalents. If you want to provide structure to your content and keep it colocated with where you use it (or imported and type-checked where you have that kind of reusability), putting components in Markdown is an awful experience. Inverting that and writing Markdown within posts is my current solution. It's not as nice as a plain .md file, but it lets me mix writing and structure without compromise.
Realizing I should add a little more context: it's very common to use Markdown for content and even for generating routes or subpages. But when those pages/posts/sections are expected to provide a certain data/content structure, it's generally provided by "front matter" metadata or mixed-source formats like MDX, which mixes JS/TS and JSX into Markdown. That experience doesn't have any real editor safeguards, though it has some linting tools, and it has basically no dependency checks on what would be rendered. You just have to cross your fingers and hope. And if your content is expected to have any particular metadata, good luck refactoring.
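To make that concrete, here's a rough sketch (the names and fields are illustrative, not from any particular SSG) of the kind of check front matter never gets for free: the parsed metadata comes back as untyped data, so you either validate it yourself at build time or find out when the page renders.

```ts
interface PostMeta {
  title: string;
  date: string;   // ISO date string
  tags: string[];
}

// `data` is whatever your front matter parser handed back (gray-matter, your
// SSG's loader, etc.); as far as the type system knows, it's just `unknown`.
function validateFrontMatter(data: unknown): PostMeta {
  const meta = data as Partial<PostMeta>;
  if (typeof meta.title !== "string" ||
      typeof meta.date !== "string" ||
      !Array.isArray(meta.tags)) {
    throw new Error("post is missing or mistyping required front matter fields");
  }
  return meta as PostMeta;
}

// e.g. validateFrontMatter({ title: "Hello", date: "2020-08-01", tags: ["wasm"] })
```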
I assume you mean NodeJS-based generators? Static site generators as a concept have existed since the very earliest days of the web. I worked on a team that used an in-house SSG to maintain dozens of sites years before NodeJS (or even V8) existed.
Jekyll uses Ruby, not Node. I don't know what all the others use; there are so many.
Sure, generating HTML via another language is not new, but SSGs pushed back against the notion of using dynamic CMS systems (which were de rigueur in the early 2000s) in favour of static pages hosted on serverless systems like GitHub Pages and S3, with content (mostly Markdown) held in version control instead of a database.
Wasm is exciting because not only is it a clean, elegant instruction set that has learned a lot from the design warts of JVM and .NET, it also has formal verification as a first-class design goal. This makes it interesting for high-trust environments outside the web context as well.
How "web-specific" is WebAssembly? This all sounds like an enticing feature set a general-purpose VM for writing desktop and server applications, not just making fancy stuff in a web browser. Is there any chance we'll see something like Node-but-it's-Wasm at some point?
“...provides access to several operating-system-like features, including files and filesystems, Berkeley sockets, clocks, and random numbers, that we'll be proposing for standardization.
It's designed to be independent of browsers, so it doesn't depend on Web APIs or JS, and isn't limited by the need to be compatible with JS.”
As far as I've seen, not very web specific at all. Everything that you might fear being included is actually linked in and is not part of the VM at all (e.g. all Web APIs).
This came across HN before. [1] My conclusion then:
> Notably, they don't appear to even try to break the WA-host memory barrier, which I actually find to be a validation of the core design goal of WebAssembly: isolate the damage a vulnerable program can inflict to the memory space (and thus also output) of that program. Protect the host from the program, but not the program from itself. Also, maybe don't dump WA output you can't validate directly into DOM.
Regardless, WebAssembly has just had one security group doing research against it; I expect plenty more to switch their attention to it as WebAssembly slowly starts to gather more market share.
> Protect the host from the program, but not the program from itself.
All nice and dandy, except that the goal of WebAssembly is to extend the host, and to have the host depend on the program's behaviour.
A host unaware that the program is compromised can be led to make decisions it wouldn't otherwise make, like allowing a basic user to acquire admin credentials that were supposedly correctly validated by the program.
I am really eager to see the CVEs on WebAssembly start popping up; then all the WebAssembly advocates can tell the world why we had to ditch PNaCl and CrossBridge, regress 10 years, and get the same outcome in the end.
I know very little about PNaCl, so I cannot say whether wasm is better or worse; I am just defending the existence of a specific role for WebAssembly: lightweight sandboxing with strong isolation guarantees. Personally I have high expectations for this strategy.
Regarding the "Everything Old Is New Again: Binary Security of WebAssembly" they find that compiling something like photoshop to a single wasm module forfeits many security features available in x86, but this also point toward a future where application can be compiled in dozens or more small isolated modules. It remains to be seen if this will suceed or fail...
I've lost count of how many bytecode formats and sandboxes I have used or read about.
WebAssembly is what the browsers now offer us, so I have to accept it. That doesn't mean I buy into the security story, especially when basic stuff like bounds checking inside the same linear memory segment isn't considered relevant.
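To illustrate what that means in practice (a contrived sketch from the JS side, not an exploit): all of a module's data lives in one flat WebAssembly.Memory, and an overrun that stays inside that memory never traps; it just lands on whatever is stored next.

```ts
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB of linear memory
const heap = new Uint8Array(memory.buffer);

// Pretend the module laid out a 16-byte input buffer with some unrelated
// "secret" state right after it.
const inputOffset = 0;
const secretOffset = 16;
heap[secretOffset] = 42;

// A buggy copy that writes 20 bytes into the 16-byte buffer doesn't fault:
// it silently overwrites the neighbouring data. That's exactly the
// intra-memory corruption the sandbox doesn't prevent.
for (let i = 0; i < 20; i++) {
  heap[inputOffset + i] = 0xff;
}
console.log(heap[secretOffset]); // 255, not 42
```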
One thing that's really great about wasm is how it enables lots of native libraries to be usable in the browser. Recently I ran into performance issues due to limitations in the WebAudio API and was able to work around them by compiling FFmpeg to wasm and using it instead. Being able to use FFmpeg in the browser is very powerful and brings a huge suite of encoding/decoding capabilities that browsers currently lack (at least until WebCodecs becomes standardized and implemented).
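For anyone curious what using such a port looks like from the page, the general shape is something like this (the module factory and the `_decode_audio` export are made up; `_malloc`, `_free`, and `HEAPU8` are standard Emscripten conventions, and every build generates its own glue, but the copy-in / call / copy-out dance is the same):

```ts
// Hypothetical typings for an Emscripten-built codec module.
declare function createFFmpegModule(): Promise<{
  _malloc(size: number): number;
  _free(ptr: number): void;
  _decode_audio(inPtr: number, inLen: number, outPtr: number, outCap: number): number;
  HEAPU8: Uint8Array;
}>;

async function decodeAudio(encoded: Uint8Array): Promise<Uint8Array> {
  const mod = await createFFmpegModule();
  const inPtr = mod._malloc(encoded.length);
  const outCap = 1 << 20;                 // assume 1 MiB of output is enough for the demo
  const outPtr = mod._malloc(outCap);
  mod.HEAPU8.set(encoded, inPtr);         // copy compressed bytes into wasm memory
  const written = mod._decode_audio(inPtr, encoded.length, outPtr, outCap);
  const pcm = mod.HEAPU8.slice(outPtr, outPtr + written); // copy decoded bytes back out
  mod._free(inPtr);
  mod._free(outPtr);
  return pcm;
}
```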
On the other hand, compiling existing C/C++ libraries to wasm is still very painful. I struggled a lot using Emscripten to port FFmpeg to wasm - you have to compile EVERY dependency to wasm (so I ended up re-compiling libopus, libmp3lame, etc.) and the compiler warnings are often super cryptic, which made the entire process a complete nightmare. The process really needs a lot of work still.
> I struggled a lot using Emscripten to port FFmpeg to wasm - you have to compile EVERY dependency to wasm
This is a scenario where the Bazel model is a good fit IMHO. Bazel rebuilds all of your dependencies from source. This makes it easy to compile all of your dependencies with an extra flag (e.g. -fsanitize=address for ASan) or with a different compiler (e.g. Emscripten).
While it can be annoying to wait for the world to rebuild every time, this is a use case where it shines.
When we set this up to compile Sorbet (C++ codebase) for https://sorbet.run, it involved what I considered an inordinate amount of boilerplate and arcana.
To be fair, since we set it up it has hardly ever needed to be touched, and I could probably cargo-cult it into future projects where I wanted to use it, but I wouldn't exactly say that Bazel magically makes the pain of Emscripten go away.
Yikes. That definitely seems more complicated than it should be. I have not used Emscripten in this way before; I was going from my experience of passing custom flags like -fsanitize=address.
I was under the impression that Emscripten was just an alternative compiler binary, such that you could just use CC=emcc. Is that not the case?
The trick is that providing Bazel with a custom toolchain involves way more than just setting an environment variable, because Bazel wants to control installing and making the compiler available reliably (e.g., what if `emcc` is not present on the system where Bazel was invoked? Bazel solves that problem by fetching it and building it for that system).
There are projects that provide drop-in support for custom toolchains (e.g., we use this project[0] in Sorbet to fetch and build a custom LLVM/Clang toolchain for every host we build on, rather than relying on the system toolchain). But I'm not aware of a project that has done that for Emscripten. Maybe it would be as easy as plucking out what we've done in our project into a project that others could depend on, but to quote a colleague:
> Setting up a cc toolchain in Bazel is a unique sort of pain.
Something that I've been looking for as I try to learn more about WebAssembly is good examples of tasks that benefit from being converted from JavaScript to WebAssembly, whether for performance, memory, or other reasons. Does anybody have a link to a blog post or good documentation like that?
Determinism! [0]
I'm switching a browser game side project from Typescript to WebAssembly because determinism enables a simpler style of multiplayer. Instead of relying on a server to manage state, I can just send inputs p2p like a modern fighting game (with rollback [1]). Then the game acts just like singleplayer! For a hobbyist like me, WebAssembly makes the minefield of multiplayer so much easier to navigate.
Object enumeration order (before ES2020 [1]), math function accuracy (Firefox and Chrome did standardize on fdlibm though [2]), various timings. All of them can be fixed with caution but it might be easier to use a platform where none of them matters (well, as long as you don't import Math.sin etc. from the environment...).
In many of the languages used to compile to WebAssembly (C++, C#, Rust), enumerating the standard dictionary type is non-deterministic. A JavaScript object is a dictionary, so it is not surprising that it had an undefined order. If you need order, use an array; if you need to look up a value quickly, use a dictionary; if you need both, you compose an ordered dictionary structure that internally has a dictionary and an array, as in the sketch below.
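The composition being described, sketched in TypeScript purely for illustration (JS's own Map already guarantees insertion order; this is the shape you build in languages whose hash maps don't):

```ts
class OrderedDict<K, V> {
  private order: K[] = [];          // remembers first-insertion order
  private lookup = new Map<K, V>(); // gives O(1) lookup

  set(key: K, value: V): void {
    if (!this.lookup.has(key)) this.order.push(key);
    this.lookup.set(key, value);
  }

  get(key: K): V | undefined {
    return this.lookup.get(key);
  }

  *entries(): IterableIterator<[K, V]> {
    for (const key of this.order) yield [key, this.lookup.get(key)!];
  }
}
```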
I guess since so many JS devs never understood this and built code around one JS engine's property-order behavior, they had to go and specify it as a standard, and now all JS engines must add extra overhead to track and maintain property order...
Not sure how WebAssembly helps with object enumeration order; it doesn't have objects to enumerate! To make objects you compile to WebAssembly from languages that most likely have undefined dictionary order and may not even have reflection to enumerate an object at runtime.
I’m using WASM as an execution environment for game code. It’s not so much for the performance aspect, it’s more that I can run the same simulations on the client and server side and have them agree. Rather than duplicating the simulation logic, say, in JavaScript and Rust, I can write the simulation logic once in Rust and use it in both places.
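The mechanics of that are pleasantly boring: the same .wasm artifact gets instantiated in the browser and in Node, so both sides literally execute the same code. A rough sketch, with a made-up module name and `tick` export:

```ts
async function loadSim(): Promise<(state: number) => number> {
  let module: WebAssembly.Module;
  if (typeof window !== "undefined") {
    // Browser: stream the module over HTTP.
    module = await WebAssembly.compileStreaming(fetch("/simulation.wasm"));
  } else {
    // Node: read the very same artifact from disk.
    const { readFile } = await import("node:fs/promises");
    module = await WebAssembly.compile(await readFile("simulation.wasm"));
  }
  const instance = await WebAssembly.instantiate(module, {});
  // Both environments now call the exact same simulation code.
  return instance.exports.tick as (state: number) => number;
}
```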
It's a design tool that runs entirely in your browser as a WebAssembly-based frontend tool (coded in C++ IIRC). Go kick the tires and play around with it--you will be amazed how fast, fluid and downright native the experience feels.
IMHO I think we're going to see more and more frontend experiences like this in the near future. For certain classes of complex apps we're starting to see that the overhead of all the frontend JS cruft, polyfills, reactivity, etc. is just getting out of hand and destroying browsers on low-spec phones and machines. A little Go/C++/Rust/AssemblyScript, etc. app compiled to WebAssembly interacting with the DOM directly is incredibly fast and space efficient. The build system for something like Go or Rust is so much saner and easier to use vs. a complex modern JS Webpack setup too.
In my experience, the single biggest perk of using WebAssembly is that I can use a language I'm very productive in (Rust) compared to JavaScript. Everything else is secondary. That said, I think these projects have specific advantages by virtue of being WebAssembly:
- The backtracking search used for the crossword puzzle generator is carefully implemented to reduce memory allocations. This would be tough to do in JavaScript, and I believe it's partly responsible for its performance.
- The word game uses a compression algorithm that benefits very noticeably from wasm-opt, to the point that I can't run it without it. Given that wasm-opt takes a non-trivial amount of time at compile time, I suspect the JavaScript JIT would be slow at doing something similar at runtime. This is just conjecture, I haven't checked.
- What Aper does just wouldn't be possible without Rust features like Serde and macros.
I've experienced a 2x speedup when porting tight loops from JS to WASM. In my case, my tasks are decompression and texture decoding. I initially ported to C, and then ported to AssemblyScript, being careful to manually manage the memory along the way.
The old code was also very "tight-loop" code that's just math, with no GC allocation, so it's not applicable to many people here yet, and it's possible that JS engines have improved since I ported (GC behavior has gotten noticeably better in V8 in the last two years), but I'll take the speedups I can get.
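Not my actual code, but this is the flavor of loop that ports well: pure integer math over preallocated typed arrays with no allocation inside the loop, so it translates almost line-for-line to AssemblyScript or C. For example, RGB565 -> RGBA8888 texture decoding:

```ts
// dst must be preallocated with src.length * 4 bytes; nothing is allocated here.
function decodeRGB565(src: Uint16Array, dst: Uint8Array): void {
  for (let i = 0; i < src.length; i++) {
    const px = src[i];
    const r = (px >> 11) & 0x1f;
    const g = (px >> 5) & 0x3f;
    const b = px & 0x1f;
    const o = i * 4;
    dst[o]     = (r << 3) | (r >> 2); // expand 5 bits to 8
    dst[o + 1] = (g << 2) | (g >> 4); // expand 6 bits to 8
    dst[o + 2] = (b << 3) | (b >> 2);
    dst[o + 3] = 255;                 // opaque alpha
  }
}
```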
The YouTube series “HTTP 203” has a couple of great episodes on WebAssembly. Here’s one I like as it is both accessible and in depth: https://youtu.be/S0NQwttnr1I
In Bucket Brigade (echo.jefftk.com), I'm currently encoding and decoding Opus audio with an Emscripten port of libopus. [1] This stresses the browser pretty hard, and at some point I'd love to switch to WebAssembly.
I personally use it to simplify and synchronize encryption logic between client side and server side; it makes things a breeze, especially if you don't want to mess with JavaScript's crypto APIs.
Go - big bundle sizes, because of the GC I'm guessing, idk.
C/C++ - I haven't touched C/C++ in a really, really long time, but probably the other really good fit.
Rust - I see Rust as the successor to what I would've used C/C++ for, but I'm not sure if the Rust ecosystem is fully there yet.
Last time, I had some weird issues with wasm-bindgen.
AssemblyScript (AS) - Something I really like and what I'm most comfortable with now that I've been doing Java/C#/TS for so long. It's also geared towards wasm, which is nice. It's my favorite solution, but I don't really get it like I get the other options, because everything else is an "actual language" that's used for other things. I'm really not sure what the strict TS stuff is about; the devil is in the details, and this is getting way too high level.
C# - Maybe good if someone is heavily invested in .NET.
At the end of the day, these wasm blobs are probably for heavy tasks like functions with a lot of loops, not full systems, so any simple solution would probably work. That's why I like AS (or even C++), but sometimes these tasks require some niche external libraries, in which case Rust would probably be the best option.
The rust+wasm story is quite nice these days. I've been working on a rust+wasm SPA for a few months and I have yet to run into issues with the tooling (wasm-pack, wasm-bindgen, etc).
From my past experience working with wasm-bindgen, it seems that you cannot get auto-completion for Web APIs because the bindings are generated dynamically. Is this issue resolved now? What IDE/editor do you use to enable that?
Really off-topic, but I have recently started learning Go and also just looked into V. The simple hello world compiled binary of Go is 2088KB while that of V is 415KB. I'm on Windows. So is the large size solely due to GC?
This isn't true. If you block JavaScript from running, then that will include blocking WebAssembly. WebAssembly is only started through JavaScript, and if things ever change so that webpages can include WebAssembly directly, presumably it will also be controlled by the same browser settings that control whether JavaScript is executed.
I have this running as a uBO scriptlet injected into every website before content JavaScript starts to execute:
*##+js(myscript)
It does various things: disables WebAssembly, WebSocket, AudioContext, service workers, openDatabase and indexedDB, navigator.sendBeacon and window.opener, some fingerprinting randomization stuff, etc. I wish there was a browser on the market actually letting you configure such things _per domain_ as Opera Presto used to; alas, we are on our own now :(.
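For anyone wondering, the WebAssembly/WebSocket part of such a scriptlet boils down to roughly this (simplified; a real uBO scriptlet has more plumbing): clobber the globals before any page script can grab a reference to them.

```ts
(() => {
  const blocked = ["WebAssembly", "WebSocket", "openDatabase", "indexedDB"];
  for (const name of blocked) {
    try {
      // Replace the global with undefined and lock it so page scripts can't restore it.
      Object.defineProperty(window, name, {
        value: undefined,
        configurable: false,
        writable: false,
      });
    } catch {
      /* some properties can't be redefined in every browser; ignore */
    }
  }
  // Stub sendBeacon so call sites fail quietly instead of throwing.
  navigator.sendBeacon = () => false;
})();
```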
What I am envisioning here is the content of a webpage compiled with the ads baked in. That would be desirable to the Googles of the world for this very reason.
This is still possible using JavaScript, and yet it is not widely seen.
You can think of WebAssembly as another mechanism for executing code in the browser in the context of a web page. JavaScript is still required to (1) load the wasm binary, (2) evaluate it and (3) connect it to DOM APIs or really anything outside of the isolated wasm execution context.
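Those three jobs, in code form (a generic example with made-up module, export, and import names):

```ts
async function run() {
  const imports = {
    env: {
      // The only way the module can touch the page: capabilities we chose to hand it.
      log: (value: number) => console.log("from wasm:", value),
    },
  };

  // (1) load the binary, (2) compile and instantiate it,
  // (3) connect it to the outside world via the import object.
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("/module.wasm"),
    imports
  );

  // Exports look like plain functions from JS's point of view.
  const add = instance.exports.add as (a: number, b: number) => number;
  console.log(add(2, 3));
}
run();
```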
None of these scriptlets would break if a site uses WASM; they're utilities for things like adding/removing/mutating elements in the DOM, modifying localStorage, stopping alert().
executesitefunction.js could arguably be affected, but that relies on a site installing a particular function on window, which isn't required when using JS, and could still be done where the function is backed by WASM.
Don't worry, it's not likely to happen - the ads that are displayed now as banners are literally bid on as the page loads, so at most the ad-serving code would be bundled, but you would still be able to block the binaries they load.
Advertisers would hate that kind of model. They want perfectly targeted and customized ads for each individual viewer. Baking the ads into the content or page itself would be going back about 20 years in their world.
They've been working on adding that to the standard for a while. But it's a complex topic, which is why it is taking time. You can of course bundle your own garbage collector and statically compile it into wasm; that's how Go, C#, etc. can target WASM.
JetBrains is actually working on a wasm backend in their new IR compiler for Kotlin. Apparently they are planning not to bundle their own GC and will instead try to rely on WASM GC when that is ready. I'm not sure how close they are to a usable release, but I expect early betas might start appearing late this year or next year.
Wasm itself is at the level of assembly, not the level of JavaScript. If you want to compile a GC'd language to wasm, you have to bring your own garbage collector and compile it to wasm as well. That's not a showstopper from wasm's perspective, and several languages have done just that, but porting a GC to a new compilation target can be a bit of work.
It's not that they can't be supported, but they generally have a big runtime, so your compiled output would be big. If you're running WASM in the browser then that's a big image that you need to send to the client.
You can find the link to the source of any article at the bottom of the page.