Aren't these just IPCs disguised as normal function calls though? IIRC only the main process does anything Node-related; renderers can call "Node functions" that really execute in the main process.
Not at all. In a renderer the Node and Chromium event loops are bound together; they’re part of the same V8 isolate, no IPC shenanigans.
The main process really shouldn’t be used for anything except setup. Since it controls GPU paints amongst other things, blocking on it will cause visible stuttering and a bad user experience.
Separately, the IPC layer lets you do zero-copy transfers in some circumstances via Transferable objects such as ArrayBuffers. Structured cloning is efficient but not zero-copy, and JSON serialisation shouldn’t be used (since structured cloning is readily available).
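For reference, the transfer-list pattern looks roughly like this (a minimal sketch using the standard postMessage API available in any Chromium renderer; the worker script name is made up, and whether a particular Electron IPC hop actually stays zero-copy depends on the channel and version):

```typescript
// renderer.ts: transferring instead of cloning. The ArrayBuffer is detached
// from the sender and handed to the receiver without a copy.
const worker = new Worker('crunch-worker.js'); // hypothetical worker script

const samples = new Float64Array(1_000_000); // ~8 MB of readings
const buffer = samples.buffer;

// The second argument is the transfer list. Omit it and the buffer is
// structured-cloned (copied); include it and ownership moves.
worker.postMessage({ kind: 'samples', buffer }, [buffer]);

console.log(buffer.byteLength); // 0: the buffer is detached on this side after the transfer
```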
Thanks for adding this context! Guess I was misled by the Electron documentation talking about multiple processes and IPC, appreciate the clarification!
Within a renderer you can access NodeJS APIs directly. The main process shouldn’t be used for any significant computation, as it will block GPU paints and cross-process synchronisation.
The other main difference is that Electron bundles a known set of APIs for a known Chromium version. There’s huge variance in supported features across the embedded web views.
Yes, this is the best benefit of Electron: you don't have to troubleshoot tens of OS webview versions and their incremental support, especially on macOS.
But it is right that the UI for Electron has to use an IPC layer to get a Node backend running. However, Chrome is moving a lot of things like the File System Access API into browsers, so there may be a day where Node.js is dropped in favor of a sandboxed Chromium.
You don’t need IPC: you can either use a preload script to expose particular Node APIs in a secure manner, or set `nodeIntegration` to `true` to expose everything.
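A minimal sketch of the preload approach using Electron's contextBridge (the exposed name and API surface here are made up for illustration; recent Electron versions sandbox the renderer by default, so a preload that requires Node built-ins needs `sandbox: false` on the window):

```typescript
// preload.ts: runs in the renderer but with Node available, so it can hand
// a narrow, hand-picked capability across the context bridge with no IPC.
import { contextBridge } from 'electron';
import { readFile } from 'node:fs/promises';

contextBridge.exposeInMainWorld('hostFs', {
  readTextFile: (path: string) => readFile(path, 'utf8'),
});

// Renderer code can then call:
//   const log = await window.hostFs.readTextFile('/var/log/device.log');
//
// The blunt alternative mentioned above is `nodeIntegration: true` (plus
// `contextIsolation: false`), which exposes require() and the Node globals
// directly to the page.
```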
Conversely, the last blog post we wrote was 8,000+ words and took months of testing, yet the average 'read' time is under 2 minutes. I'm convinced there's a correlation between interested technical users and the blocking of analytics scripts - but if I were to naively look at the data, I'd also come to the conclusion that "lower effort" was better return on investment. I wonder if these tech journalism establishments are following their analytics and A/B testing themselves into oblivion.
It's like meat and potatoes, though. Yes, you can fill a website up with low-effort filler content that keeps your viewers engaged and visiting, but in the long run you also need some solid meaty stuff.
A lot of that sorta stuff moved over to YouTube because it was easier to monetize. I think a hybrid of the two is the nicest (reading charts from YouTube videos sucks).
It's a weird trap. With no analytics, it'd be difficult to attribute any conversions to a particular user type, so I'd wager that, if the hypothesis that lower tech users don't block ads/analytics holds up, the metrics skew that way. We can't make any realistic assertions without the data for that user group. Shrug.
We nerd sniped ourselves into testing the latencies of a whole bunch of wireless communication links and protocols for microcontrollers.
We’ll probably do a series of power consumption / range tests later on; let me know if there are any setups in particular that you’d be interested in seeing test cases for.
Raw data, firmware and post-processing scripts are here on GitHub:
I nearly did, but the write-up was getting pretty long.
I'll try to find something for the planned range/interference tests. Morse Micro is also an Australian company, so I'll probably look into their parts first unless there are any recommendations?
That sounds good if you're an Aussie. I'm honestly confused why there are still so few options, but I guess most radically new standards get off to a comparatively slow start.
I work on a product for building user interfaces for hardware devices. All the state management is done via incrementally updated, differential DataFlow systems. The interface is defined in code instead of graphically, but I think that's a feature, since the code can be version-controlled.
I think there has been evolution in the underlying data computation side of things, but there are still unsolved questions about the 'visibility' of graphical node-based approaches. A node-based editor is easy to write with but hard to read with.
Our product, Electric UI, is a series of tools for building user interfaces for hardware devices on desktop. It has a DataFlow streaming computation engine for data processing which leans heavily on TypeScript's generics. It's pretty awesome to be able to have examples in our docs that correctly show the types as they flow through the system. I certainly learn tools faster when they have good autocomplete in the IDE. Twoslash helps bring part of that experience earlier in the development process, right to when you're looking at documentation.
Our site is built with GatsbyJS; the docs are a series of MDX files rendered statically, then served via Cloudflare Pages. We use the remark plugins to statically render the syntax highlighting and hover tag information, then some client-side React to display the right tooltips on hover.
We build a Twoslash environment from a tagged commit of our built TypeScript definitions, from the perspective of our default template. The Twoslash snippets as a result have all the required context built in, given they are actual compiled pieces of code. The imports we display in the docs are the actual imports used when compiling from the perspective of a user. It bothers me when docs only give you some snippet of a deeply nested structure, and you don't know where to put it. Even worse when it's untyped JS! Using Twoslash lets us avoid that kind of thing systematically.
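For anyone who hasn't used it, a snippet in the MDX source looks roughly like this (a generic example, not Electric UI's actual API); the `^?` query is expanded into the compiler's inferred type at build time, and anything that stops type-checking fails the build:

```typescript
// Inside a `ts twoslash` fenced code block in the docs source.
function lastReading<T>(samples: T[]): T | undefined {
  return samples[samples.length - 1];
}

const reading = lastReading([1.2, 3.4, 5.6]);
//    ^?
// Twoslash renders the hover here as: const reading: number | undefined
```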
The CI system throws errors when our docs snippets "don't compile", which is very helpful in keeping our docs up to date with our product. Nothing worse than incorrect docs!
We use React components extensively, and I'm not really happy with our prop reference tables, which use Palantir's Documentalist. Our components are increasingly using complex TypeScript generics to represent behaviour. The benefits in the IDE are huge, but the relatively dumb display of type information in the prop table leaves something to be desired. I'm most likely going to replace the data for those tables with compile-time generated Twoslash queries.
My only complaints have been around the absolute speed of compilation, but I haven't dug deep into improving that. I just set up a per-snippet caching layer, and once files are cached, individual changes are refreshed quickly. After all, it's invoking the full TypeScript compiler, and that's its biggest feature.
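Roughly the shape of that cache, for the curious (a sketch only; the names and the in-memory map are illustrative, a real one would persist between builds): keyed on a hash of the snippet source plus the environment version, so entries invalidate when either changes.

```typescript
import { createHash } from 'node:crypto';

interface SnippetOutput {
  highlightedHtml: string;
  typeQueries: unknown[];
}

const cache = new Map<string, SnippetOutput>();

function renderSnippet(
  source: string,
  envVersion: string, // e.g. the tagged commit of the TypeScript definitions
  compile: (source: string) => SnippetOutput, // the expensive Twoslash + tsc pass
): SnippetOutput {
  // Cache key covers both the snippet text and the environment it compiles against.
  const key = createHash('sha256')
    .update(envVersion)
    .update('\0')
    .update(source)
    .digest('hex');

  const hit = cache.get(key);
  if (hit) return hit;

  const output = compile(source);
  cache.set(key, output);
  return output;
}
```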
Overall I've been very happy with Twoslash, and I'm looking forward to refactoring to use this successor and Shikiji (the ESM successor to Shiki); hopefully it improves our performance. The new Twoslash docs look great, a huge improvement on when we started using it.