I have been in the embedded domain for many years. I really like the approach taken by the team here. Particularly since this team is really well-versed in developing virtual machines and programming languages. But I am very much biased in their favor.
Embedded development is tricky, and anything that can help improve productivity is useful.
The problem is that there have been many, many attempts at doing this exact same thing over the years. And they all have to fight the same fight: to get some serious traction for their particular approach.
Back in the 00's, Sentilla did a Java VM for the MSP430. This got a lot of traction - they even had James Gosling himself vouch for the project.
More recently, Electric Imp and Particle.io have taken similar approaches, but with other languages.
Micropython and a variety of mini-JS engines are also around. Espruino, Tessel.io, and many more.
In a technical sense, running a virtual machine on a microcontroller is a really good way to go. You remove a lot of low-level friction that you otherwise have to deal with. And you get a lot of things almost for free, such as Over-The-Air (OTA) updates, and more control over security.
But when you get down to actually doing the work, you frequently end up wanting to have all that low-level access, even if you have to endure a bit of pain. Sometimes you don't even want to have an operating system with a hardware abstraction layer. You kinda want the bare metal.
So Toit are in a tricky situation. They may have a really good product. But they will have to get some serious traction. Developing a rock-solid programming language and virtual machine is hard. Really hard. But it is way easier than getting that sweet, sweet traction.
It is the same problem all over again. Embedded is tricky because you want to do too much with too little.
Then every time there are advancements in what you have available, people add all the niceties back. And at that point it is not embedded anymore. It's just a low-power, full-featured device. Just like Java for the MSP430 (as if anyone would spend $7.99 per chip for a TV remote with 2-week batteries, instead of $0.004 and 2+ months).
True embedded development usually doesn't even have room for a watchdog timer, let alone virtualization and memory protection.
This is not new. This is not embedded. This is just the usual cycle of hardware generations changing and people in the academic ivory tower (or worse, Google's solid gold tower) having nicer toys than everyone else.
Well put. This is a natural, expected and in many ways welcomed development in the cycle, but it’s not so much ‘embedded’ (in the sense you and I think of it) as ‘processors in this range are now computers, not microcontrollers.’
An ESP32 (and oh man they’re great) is more powerful than a mid-late 90s desktop, but power isn’t what separates a computer from a microcontroller. It’s the abstraction compared to the bare metal control.
(To be clear, the project in the post is very much embedded, but programs running on a vm written in a high level interpreted(ish) language are not so much.)
Embedded is a term that means your code is as close to the discrete-component analogy as possible, with all the same design and testing expected of the electrical design.
If you want to add modern language niceties to it, fine. But I would say that for anything that brings in the "app" concept, and especially installation and fleet management, the better term is microcontrollers.
> True embedded development usually doesn't even have room for a watchdog timer.
A watchdog timer is often a piece of functionality internal to an MCU. https://microchipdeveloper.com/8bit:wdt The majority of embedded development does use watchdog timers, at many levels.
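For what it's worth, on MicroPython ports that expose the hardware watchdog, using it takes a couple of lines. A minimal sketch, assuming an ESP32-class chip where machine.WDT accepts a millisecond timeout (timeout support and reset behavior are port-specific):

```python
# Minimal sketch: driving the MCU's internal watchdog from MicroPython.
# If the main loop stalls and stops feeding it, the hardware resets the chip.
from machine import WDT
import time

wdt = WDT(timeout=2000)   # 2 s hardware watchdog; timeout support varies by port

while True:
    # ... real work goes here ...
    wdt.feed()            # kick the watchdog on every healthy iteration
    time.sleep_ms(100)
```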
It seems there are some conflicting ideas about what "embedded" means. For the conventional definition, if it's in a single function box that doesn't look like a computer, it's embedded. Even 20-30 years ago your laser printer might have had more CPU and memory than the computer connected to it, and it was considered embedded. ATMs are embedded too. MS sells "embedded" versions of Windows.
Maybe the industry would benefit from some more fine grained terminology here, perhaps call the small stuff MCU embedded?
You are correct: I think the definition has been fixed in time even as the underlying meaning changed: I've programmed "embedded systems" that were based on x64-class processors.
But that's not the important part. I think one major problem is that the term "embedded" puts a lot of engineers into a frame of mind that rejects programming in anything high-level. We have done lots of projects where something like MicroPython would have been perfect, but every time I try to sell something like that, I get a bunch of excuses that really aren't grounded in any concrete objection and we go right back to twiddling bits in C.
Usually that is the same crowd that even refuses C99, or any other compiled language as an alternative to C, even though there are vendors still in business selling such compilers.
I'm hoping someone brings WASM to MCUs with a HAL interface like WASI does for OS platform functionality. As far as I can tell, the byte code is simple and easy to write an interpreter for, with a lot of practically useful features like static memory blocks declared ahead of time that make it conducive to MCUs. It should even be possible to write test harnesses with simulators for each architecture that gives a precise prediction for how many cycles a given block of WASM will take in an interpreter so that timing for interrupt handlers, tasks, etc can all be worked out in simulation without even touching the hardware.
If a given block of WASM isn't performant enough in the interpreter, then drop it to compiled code and expose it to the rest of the WASM - at least then only a small fraction of your code requires a painful firmware upgrade, the rest can be updated in a user defined section of ROM. It would be great to have a compiler/linker combo that could use annotations to configure which parts of the firmware should be compiled and which should be interpreted. With inlining or even just AST substitution (replace global vars with register access), it should provide all of the low level access necessary while still allowing for high level Arduino-style libraries.
Having messed with some WASM code, I would say that the byte code is NOT easy to write an interpreter for due to the (IMO insane) way blocks work. Blocks are nested but the start/end tags need to be matched up in order to determine where control flow goes - unlike jump instructions, the block tags don’t come with offsets built in. This means a fully-fledged interpreter needs to maintain a stack of blocks and (for efficiency) pre-scan each function to identify the block boundaries. That’s going to hurt performance a lot and take up valuable memory on a microcontroller.
A modified WASM with explicit jumps and less “magic” behavior would be great. Perhaps a transpiler could generate such an “interpreter-friendly” pseudo-WASM to be loaded on the microcontroller.
(FWIW, even though WASM supports static data blocks, it also has provisions for dynamic allocation, and most WASM programs over-allocate their static blocks to make room for a C stack. So, in practice, making WASM work for microcontrollers and static data would also mean new languages or compilers that are much more conservative with memory use.)
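To make the pre-scan concrete, here is a minimal sketch of the side table it has to build. It assumes a separate decoding pass (not shown) has already handled immediates, so the input is just (offset, opcode) pairs for one function body; real Wasm would additionally need block types and LEB128 decoding:

```python
# Side table built by a pre-scan pass: map each block/loop/if to its matching
# `end` (and each `if` to its `else`), so branches can be resolved in O(1) at
# run time instead of re-scanning for structure on every jump.
BLOCK, LOOP, IF, ELSE, END = 0x02, 0x03, 0x04, 0x05, 0x0B

def build_side_table(opcodes):
    end_of = {}    # offset of block/loop/if -> offset of its matching `end`
    else_of = {}   # offset of `if`          -> offset of its `else`, if present
    stack = []     # offsets of currently open structured blocks
    for offset, op in opcodes:
        if op in (BLOCK, LOOP, IF):
            stack.append(offset)
        elif op == ELSE:
            else_of[stack[-1]] = offset
        elif op == END and stack:   # ignore the trailing `end` of the function body
            end_of[stack.pop()] = offset
    return end_of, else_of

# Hand-made stream equivalent to:  block  if ... else ... end  end
stream = [(0, BLOCK), (2, IF), (4, ELSE), (6, END), (7, END)]
print(build_side_table(stream))   # ({2: 6, 0: 7}, {2: 4})
```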
Having written an interpreter for Wasm (Wasm3), you're right, it's not an ideal format for direct interpretation. Wasm3 does a transcoding pass first to optimize for speed, but we still have it running on some pretty constrained devices.
The blocks-with-return-values WASM uses are quite conducive to static analysis, so it's a rather sensible choice for a general purpose VM. It's essentially a different form of SSA which dispenses with the unintuitive phi-function syntax. Obviously, for actual execution, one might want a lightweight pre-processing step introducing actual jump targets.
One problem with embedded is how much variation there is across the board.
Articles like this hide this complexity by talking about "microcontrollers", but behind that innocent facade hide utterly different beasts like the Atmega (of Arduino fame), MSP430, PIC, SH4, and anything Cortex-M3-based, just to cite a few.
It's not just about the generic memory and compute constraints of the genre as a whole, but adapting to the specifics of each platform. Solved the platform/tooling for the ESP32? Great, but that doesn't help my STM32-based project.
To make a crude analogy: the familiar "PC" architectures are pretty consolidated (x86, x86-64, or arm32(Thumb), arm64), but they're like the primate clade compared to the microcontroller "animals" that range from the carpenter bee through the sponge.
Correct. Yet, it 'would be nice' if there were some toolchain that ran on all of those. The Arduino environment is the closest thing today, I think. For what it is, it's not bad.
Kasper Lund, of V8 and Dart fame, and Toit.io have brought an IoT platform to market based around a purpose-designed dynamic language with a bytecode VM optimized for use on MCUs. That VM engine is hosted in FreeRTOS, which together with the VM and a deployment API provides a reliable and crash-resistant platform for deploying apps to your IoT fleet. In their words, "deploy a long-lived Toit application on your device, as if it was a mobile app being installed on your smart phone." (https://docs.toit.io)
The server end of their platform provides the management and app deployment infrastructure. It's claimed that the RTOS and VM environment makes deployment reliable even in the presence of failing apps, and that one app on an IoT device will not prevent other apps from running, compromise management, or brick the device.
Toit.io, the company, makes money by charging you $0.10/MB after the first 100 MB/mo for use of their management and app deployment, and you are left free to choose your own data platform. What I don't have a feel for is what sort of utilization per device I can expect for the management and instrumentation traffic.
*Edit: I dug around the website, and the tooling, including the language, looks like it is mostly closed source.
It's a really great concept, and the embedded world is crying out for more solutions like this.
However, I'd be very, very hesitant to develop anything significant on this platform. At the very least I'd want to have (a) the tooling to run unit tests / simulations of my programs on the PC before deploying, and (b) a clear migration path for if they went bust.
Their value add is the infrastructure around deployment, so why is their language closed source?
I really like the Balena.io approach - the whole platform is open, so if they do go bust you could run it yourself, but they provide real value in running the infrastructure - so it makes loads of sense to pay them.
The article is not really about V8 not fitting on microcontrollers, but aside from that:
> In a nutshell, the problem is that on microcontrollers everything is firmware that is compiled, linked, and deployed together using really old-fashioned tools. Changing anything means changing everything.
The author seemingly hasn't heard of http://www.ulisp.com/ which is small enough to fit within kilobytes of memory and at the same time featureful enough to drive complex microcontroller applications. At the same time, it has a normal REPL running on the microcontroller that is accessible from the outside and usable for Lisp-style interactive programming.
There is also Micropython and there are a couple of javascriptish options as well. These are all leaky abstractions, none of these languages actually works exactly the same as the versions that run on full OSs with memory protection. As far as I know, none of it is actually used in industry and the workflow the author describes is the standard practice that is followed pretty much everywhere. Of course, Toit will also face an uphill battle to get any adoption in this market. I think they made a mistake using their own language instead of pretending it is Python or Javascript, even if this lets them shed a lot of baggage. Their platform is interesting though and does I think address some real challenges people would face when trying to get in-house IOT-type work done on ESP32.
If the scope is limited to ESP32/ESP8266, it seems technically possible to run QuickJS on them.
From Fabrice Bellard himself:
> QuickJS should be able to run on the ESP32 platform as it is OS independent (as you said, quickjs-libc.c is not part of the engine). For simple scripts it should fit in the available RAM.
One can do more with Tasmota and the Berry language (which also looks like Python, only with useful end statements), or with Arduino, NodeMCU, or MicroPython, than what this platform offers at this time. I2C and SPI drivers, everybody has that. If one wants to do serious stuff they would use an appropriate RTOS and program it in C. MongooseOS does more than this if we're talking ESP32: also other devices, Javascript, C, C++, commercial support, cloud-based OTA upgrades, and integration with AWS, Azure, Google and IBM Watson IoT cloud services.
Not quite sure if I follow what you're saying. As in Tasmota/Berry do or do not do more than provide I2C/SPI?
> If one wants to do serious stuff they would use an appropriate RTOS and program it in C.
It's unfortunate, but still largely appears to be the case. I find C very time consuming to program, so I ported Nim to FreeRTOS [1]. It's _very_ nice being able to go from writing highly optimized ISR functions to high level JSON parsing in one language. Add in defaulting to memory safety but with no pause-the-world GC. I tried Rust but it seems more difficult to integrate into existing world RTOS'es, flashers, Swagger debuggers, etc.
Though, I've been curious what running a WASM VM would be like? One could integrate any language: C++, C, Nim, Rust, etc. Would be interesting.
> MongooseOS does more than this if we're talking ESP32, also other devices, Javascript, C, C++, commercial support, cloud based OTA upgrades and integration with AWS, Azure, Google and IBM Watson IoT cloud services.
MongooseOS does seem interesting, but it seems to be targeting a niche market with prebuilt needs? For future RTOSes I think ZephyrOS [2] has a lot of potential given it's now supported by NXP [3], TI, and others but is independent of any given (cloud) vendor or other IoT companies. Some might not like the CMake-based build system, but in my view all the RTOS build systems are terrible in their own special way.
Tasmota has drivers for almost anything you'd imagine. Adding I2C or OW sensors, for instance, is mostly plug and play. If you wired them correctly and used the appropriate image they would just work. Berry is an add-on scripting language that runs on a VM. It's only present in the ESP32 dev branch.
Zephyr is nice and well documented but support for ESP devices is quite lacking. For NXP devices it's great.
There is a thing called a "tethered" or "umbilical" compiler in the Forth world where the target machine has nothing on it but a small comms program that is controlled by the IDE on the workstation. The IDE includes a way to kick off any piece of code interactively on the target from a command line but there is no actual REPL on the target machine, only the running code. This gives you a big IDE while running on a tiny target and reduces a lot of the crying shown in the cartoon. :-)
Is there something similar for other language environments?
I know of nothing exactly like that suitable for professional use.
There are ways to tether a microcontroller to a host IDE, like Linx for LabVIEW or the firmata protocol for lots of languages, but those methods don't let you go from running code on the workstation to running code on the target (easily) like I think you're describing.
http://firmata.org/wiki/Main_Page
The Mathworks' Matlab and Simulink arduino stuff advertises some capability in that regard but I haven't seen it myself.
IIRC, in the 90's there were several Forth implementations that worked as described and appeared to have been somewhat successful in the embedded market. Those were before my time though.
However, with the amount of RAM/flash on modern MCUs you can implement the "compiler" code in Forth. I used a similar scheme for a while during experimental phases, but have moved away as Forth code becomes tedious to modify after you haven't looked at it for a while (e.g. a few weeks). It's just easier to set up OTA firmware updates.
I assume Forth, Inc's stuff works and I assume it's still around, but I've never used it. Forth is lots of fun compared to languages with curly braces so it's too bad it was already on its way out back then.
Modern MCUs have so much RAM and flash you could probably run a whole 1980s-style development environment on them (think, say, Turbo Pascal and a collection of tools) on a console, TUI and all.
They worked on a Smalltalk-inspired VM (OOVM) in the early 2000s. So I don’t think they claim that running a dynamic language interpreter is a new invention, ulisp, MicroPython, or otherwise.
I don’t know ulisp. I believe with MicroPython you basically have one monolithic Python application running in the interpreter on the device. I believe their claim is that their system enables you to have several applications run in their interpreter with reasonable isolation between them. So I think it’s more about having a single monolithic application vs a set of specialized applications / services that you can combine.
I believe their original system OOVM did allow for lisp-style (smalltalk-style) interactive development. I don’t know if their new system supports it too.
I feel like that quote might have been intriguing and made some sense on a home computer in 1981 but has no hope of being true today. Home computers in 1981 typically had no hard drives, ran exactly 1 process at a time, had no virtual memory, no internet connection, no GPU, etc., etc. Even the article you linked to is primarily saying that Smalltalk provides its own interface to things the OS also does, but is not even attempting to address concepts like multi-process, security, safety, resource contention & prioritization, and all the other things a modern OS does.
So what insight is this quote giving us today, does it still say something interesting, and is it still true in certain ways?
I don’t have a Smalltalk history or know much about projects like LispOS or Toit at all, but I do have a clear (to me) picture of what an OS does that is useful and should not be part of a programming language.

One example of that is working on console games, for example Nintendo consoles, before they provided an OS. It was a nightmare because the game developer was on the hook for handling a very long list of abnormal system conditions. The certification process for a game on a Nintendo console required all developers to conform to standards that specified exactly what errors needed to be displayed under what circumstances. Game developers needed to handle cases where a player would accidentally bump open the CD tray or pull out a cartridge in the middle of a level load, or repeatedly yank and replace a controller cable, things like that. Think about how silly it is that every single game developer, thousands of them, had to - separately and individually - spend tons of time re-engineering the same solutions to these things that the console itself should have handled, things that a couple of people at Nintendo could have engineered once for everyone. Well, now they have an OS, and this is only one of the many reasons why.

It might not be immediately apparent that Nintendo’s certification standards have any bearing or say on what a home computer should be like, but UX standards in system-wide error messages are important, designated responsibility for which process handles hardware errors and user notification are important, and making sure that efforts that have to be common to all programs are implemented in such a way that devs can’t mess them up and don’t need to reinvent the wheel are important.
Right, and there was even Multics in 1969. I was giving some benefit of the doubt, but I feel like Ingall’s quote was pretty dodgy even in 1981. Maybe it was tongue in cheek, or being intriguing and controversial just to get readers? Maybe there is some point of view that I’m completely missing?
I don't think you're missing anything. That was a very Smalltalk way of thinking at the time. However it flies in the face of separation of concerns, security, experience, .... It's a colorful quote but the marketplace of ideas hasn't been kind to it.
The hardware should be clean and the language should be able to do everything; there should be no OS/user boundary. There is no reason we can't get back to and surpass this previous state.
“The previous state,” circa the era of discussion, was punching in via front panel, and IO drivers for my disk, keyboard and 16x64 monochrome/composite display. Being that close to the hardware helped me learn assembler, but I would not want to go back there. I definitely would rather let the Linux crew write my IP stack and WiFi driver. And as far as my recent embedded system efforts go, I’m pretty stoked running C#/WinForms on the Pi. But then again, I don’t use the OS for much aside from the UI. I use a custom kernel for running the attached machinery.
Why do you say that? What does it mean to surpass the previous state? What would it do that is better than what we have today? What does it mean to not have an OS/user boundary? I don’t know what you mean if you don’t elaborate.
Note: Ingall's quote was about the programming language in general, and in the context of home (desktop) computers. Things are different, and my reply here does not apply, if we're talking about microcontrollers like in the Toit article.
> There is no reason we can’t get back to and surpass this previous state.
Yes there is. There are a whole bunch of extremely critical reasons to have an OS and to separate it from a language, which is why we have them and why we’ve always had them (on desktop machines). I already gave some reasons above, but the reasons in my Nintendo story are some of the least important reasons there are today, and they’re still pretty important.
Here are a few other reasons:
Security. Some processes should be allowed to have special access and do things that most processes cannot. Think about what you’re suggesting if you remove the OS/user boundary: it means that daemon processes written by other people have root access to your system. You do not want that no matter what you claim to want in a programming language.
Management of shared resource contention is something an OS should handle. Do you really want to have to write your program to play nice with the network, hard drive, and GPU? I don’t, it would automatically add months or years to any development projects, even if you had libraries and language features to support it, because it would force you into an asynchronous programming model with a responsibility to handle a large number of error conditions (most of which are out of your control).
The OS handles virtual memory paging, so you would be on your own for providing a memory system that can have a resident size greater than available RAM. Not only that, every process would be on its own; there would be no shared paging file. (When you think about the paging file, don’t forget security).
Other simpler reasons including program bootstrapping (loading and execution), shell & file navigator access, shared system settings (display, audio, network, etc.), temp file creation, etc., etc., etc.
The difference between embedded devices and desktop machines is another good reason not to bake the job of the desktop OS into a programming language. So is the fact that there is more than one programming language - even at its simplest, the OS boundary makes a great language-agnostic interface. (Why should every language implement its own storage, networking, and display? Wouldn't that be a complete waste?)
I can’t think of any good reasons why there should be no OS/user boundary, so that is my question: why do you want that? What good would it do? How would you handle virtual memory, shared resource contention, and security, if there was no OS/user boundary?
Sure, that’s effectively an embedded system, which puts it in the same category as a microcontroller. Like I said, my reply was about normal desktop machines, and not microcontrollers or embedded systems.
I have to admit it’s interesting in the context of virtualization, where deploying a program to a unikernel virtual machine might be perfectly fine for a lot of programs. In that case, some host OS is still handling security and resource contention, so this seems a little like ducking the question.
The unikernel design in practice does not put the kernel into the programming language either, it just allows compiling the kernel and the language together. It still has an OS/user boundary. Security is either non-existent or very difficult with unikernels. Running multiple programs at once is tricky.
“unikernels are unsuitable for the kind of general purpose, multi-user computing that traditional operating systems are used for. Adding additional functionality or altering a compiled unikernel is generally not possible and instead the approach is to compile and deploy a new unikernel with the desired changes.”
The coupling between the smalltalk language and the environment has become mildly interesting to me recently. I’d tried to see if I could find any of the old Tektronix Smalltalk workstations recently, but sadly they will likely become just another very rare item in my list.
Aside from the massively misleading HN title, I don't see the point of replacing one machine language with another machine language. Even if the latter is a virtual machine language.
The microcontrollers you would use this on (i.e. not the absolute bottom end, which would be too slow for a VM) may not have a MMU, but they do have a MPU. That's enough to get process isolation. And implementing relocation to load an ELF image isn't rocket science.
Also, I'd like to point out the billions of embedded systems implementing safety critical functions like huge industrial robot controls, or the brakes in your car. They're all built (and certified) without a VM. As a matter of fact, the VM would probably make them fail certification, unless it is specified and verified itself to a very fine degree.
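On the relocation point above: the core of it really is simple. A toy sketch of the idea, assuming the linker has emitted a table of offsets holding addresses that just need the load base added (the simplest, R_ARM_RELATIVE-style case); a real ELF loader has more relocation types and section handling to deal with:

```python
import struct

def relocate(image: bytearray, load_base: int, reloc_offsets):
    """Toy relocation pass: at each recorded offset, the image holds an address
    linked as if the blob were loaded at 0; add the actual load base."""
    for off in reloc_offsets:
        (addr,) = struct.unpack_from("<I", image, off)
        struct.pack_into("<I", image, off, (addr + load_base) & 0xFFFFFFFF)
    return image

# Example: a 12-byte blob whose word at offset 4 points at offset 8 within the blob.
blob = bytearray(struct.pack("<III", 0xDEADBEEF, 0x00000008, 0x12345678))
relocate(blob, 0x20010000, [4])
print(hex(struct.unpack_from("<I", blob, 4)[0]))   # 0x20010008
```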
> billions of embedded systems implementing safety critical functions
I realize that there is a pattern: when high-level programmers talk about security and reliability, they basically mean "hackers breaking your system to steal your passwords". To the point that memory bugs = security vulnerabilities, and nothing else.
This, of course, has another meaning for embedded.
For application development, a memory corruption bug or a race condition is a safety issue that causes data corruption, and when the same corruption can be used to compromise the system, it's a security issue; as you said, often there's no point in distinguishing them. In the embedded world, this distinction can be important. If a vendor says its microcontrollers have safety features, it means an elaborate system of watchdog timers, glitch-filtered inputs, and checksums. On the other hand, "a microcontroller with security features" means a crypto engine and secure key storage memory.
I think the distinction should be clearly made when someone wants to sell me something like Rust. They come up with the daily link about how 70% of bugs are memory issues, plus something along the lines of "memory vulnerabilities! think about the hackers! exploits!", when it's clear that these arguments don't click in (many areas of) embedded. Safety and security have another meaning. The theft of a password is the least of my fears if I think a bad implementation of mine can chop off the hand of an operator.
The distinction is made quite clearly, one is safety, the other is security :)
(while the terms are intermixed in general discourse, both the embedded and security worlds consistently separate the two, at least in everything I've seen)
I don't think there's all that many safety issues resulting from MISRA-C non-adherence that would be fixed by putting the entire thing on top of a VM. Also, I'd wager almost all issues that a VM can catch are also caught by using a safer language, e.g. Rust (without a VM.)
That said, the really bad safety issues in a car are probably on a much higher semantic level, i.e. "at X level of braking, do Y", which neither a VM nor a safer language like Rust could catch.
Apparently Toyota followed their own internal standards, the code met them, and no bug that would cause unintended acceleration has ever been found. I still wouldn't advocate for the problems that Barr and Koopman savaged as expert witnesses in the lawsuit, but Toyota was held to an extremely high standard in court.
So basically this is an RTOS that runs an interpreter for a script(s) stored in flash. Correct? I didn't understand if this script is precompiled or not.
You have to be interpreting something that is not C.
But this is a very interesting project in the way it's presented. I once did something similar, running some sort of reduced Lua scripts in parallel.
Still, after 20 years of working in embedded, my free and unsolicited advice would be this: if you want to learn embedded, no matter how hard you are trying to avoid it, you need to learn some low-level language like C (or Rust or whatever is fancy now).
Put your mind at ease and go the extra mile. In the end, learning C is not that hard. And I suppose you are learning, because if you propose Toit at your workplace, it's because you don't know how to code anything better at the moment.
I don't want to hurt anyone's feelings or sound dismissive but... learn C (or anything low-level). The rest are just toys of the moment.
It really depends on your needs. Overkill microcontrollers aren't that expensive anymore. It's probably faster to prototype something with micropython than C. And I say that as a very proficient C developer.
On the other hand, C isn't hard to learn, since the language is so small. What's harder is to learn its pitfalls and the parts that are actually implementation-specific, and not specified.
I'd argue that learning some assembly (enough for a few toy projects) is more useful to understand the system. C is required if you want to make the most out of the hardware, at least for now.
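To make the prototyping point concrete: polling an I2C sensor is a handful of lines in MicroPython, with no toolchain or flash cycle between edits. A sketch, assuming an ESP32-style board; the pin numbers and the 0x40 device address are placeholders for whatever is on the bench:

```python
# Quick prototype: scan the I2C bus and poll a sensor register, straight from the REPL.
from machine import I2C, Pin
import time

i2c = I2C(0, scl=Pin(22), sda=Pin(21), freq=400000)
print("devices on the bus:", [hex(a) for a in i2c.scan()])

while True:
    raw = i2c.readfrom_mem(0x40, 0x00, 2)           # 2 bytes from register 0x00
    print("reading:", int.from_bytes(raw, "big"))
    time.sleep(1)
```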
> Overkill microcontrollers aren't that expensive anymore
I knew that this would be mentioned but I didn't want to write an extensive comment.
In many embedded markets, cents make the difference. Also, space constraints, power consumption, availability, etc., make you reconsider the overkillability/price relationship.
Someone will put a 16KB MSP430 in the BOM and there you go.
What I wouldn't buy is something like "hey, chips are cheaper, let's use a prototyped javascript-system over a VM over an RTOS, just because".
> What's harder is to learn its pitfalls and the parts that are actually implementation-specific
Yes. There are tradeoffs, like in everything in embedded.
> It's probably faster to prototype something with micropython than C.
Personally I wouldn't know where to start with micropython.
> I knew that this would be mentioned but I didn't want to write an extensive comment.
Oh, me neither, and I wasn't really talking of big production runs. If you have tight margins, you better take anything you can, and C is probably one of the first tools at your disposal.
> What I wouldn't buy is something like "hey, chips are cheaper, let use a prototyped javascript-system over a VM over an RTOS, just because".
You'd be surprised. The Harmony remotes come to mind as an (old) example. In environments such as startups, time to market sometimes trumps even common sense. And people (including you and me) just prefer the tools they know best.
> Personally I wouldn't know where to start with micropython.
Funnily, I have never used it (except for a bit on a numworks calculator), but I can't imagine it being difficult once it's up and running on your microcontroller of choice. You probably flash and run as usual, except it's python code and has a repl.
I found micropython to be quite valuable in the board bring-up stage. It is nice to be able to interactively test peripherals in the REPL, and these interactive sessions can be solidified into C code. This article about prototyping the ESP32 in micropython is a great example: https://nick.zoic.org/art/lilygo-ttgo-t-watch-2020/
Yeah, gdb is an amazing tool. Micropython is a few extra steps, as you have to build the firmware for your target and occasionally switch between the proper FW and the micropython FW. It still makes sense, as you can do many things that are impossible in gdb.
You can define structs, loops and functions on the fly and execute them. You can build a complete driver in an interactive session, first by poking around in registers and seeing that the HW reacts per spec, and then assembling the correct operations into an initialization function and an update function. All of this is throwaway code but it can save a large amount of time.
Maybe non-critical systems could be left with the micropython implementation, but so far I haven't learned how to profile and optimize it to satisfaction.
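For reference, the register-poking workflow described above looks roughly like this on MicroPython ports that expose machine.mem32; the register addresses here are placeholders, not a real peripheral map:

```python
# Sketch of interactive bring-up: poke peripheral registers from the REPL, then
# freeze the working sequence into a throwaway init function.
from machine import mem32

CTRL_REG   = 0x40000000   # placeholder: peripheral control register
STATUS_REG = 0x40000004   # placeholder: peripheral status register

# Typed at the REPL first, one line at a time:
#   mem32[CTRL_REG] = 0x01      # enable the block
#   hex(mem32[STATUS_REG])      # did the ready bit come up per the datasheet?

def init_peripheral():
    """The interactive session, solidified into (throwaway) driver code."""
    mem32[CTRL_REG] = 0x01
    while not mem32[STATUS_REG] & 0x01:
        pass
```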
Not on many microcontrollers, where you have no room for the debugging UART ports. I never had the luxury of gdb on my bare-metal firmware, only via self-written simulators or emulators. Not QEMU - QEMU supports nothing. Renode or Unicorn are pretty good.
I'm not talking about running GDB ON the microcontroller (that's unlikely to be possible in most cases), but instead using OpenOCD (or JLinkGDBServer, or equivalent) to debug the running image via JTAG/SWD using GDB.
One could say that because you don't want to look at micropython, now everybody else has to learn C.
You earlier made an appeal to authority that people should just learn C/low-level language, and then admitted you have no clue how you'd get started with micropython. What makes your appeal to authority... authoritative, then?
Not saying your conclusion is wrong, but here's some unsolicited advice, if you may take it: maybe you could explore the approaches you dismiss before professing that they shall be dismissed.
The above was a joke. And where did I say “I don’t want to look at micropython”?
I do know that if I want to learn it, it isn’t something out of my reach. But there are things you can do with C that you can’t do with micropython, so other than from an academic standpoint, what’s the point? I am already proficient with C.
That said, I don’t care what other people do. I had good intentions with my advice, talking from 20 years of experience in the field.
My first computer was a Timex 2068, and I learned C a long time ago, enough to be dangerous, and I know only to touch it when there is no other modern option available.
Most of the stuff people do in the Makers community is more than doable with Micro/CircuitPython + a bit of Assembly.
My understanding is that with the Toit VM you can have multiple programs running, and if one crashes, the others continue. The VM, etc., allows live updates and/or deploying something new.
If you're interested in hearing Kasper talk about Toit, he did a long livestream the other day; the Toit bit is at the 1hr 34min mark.
https://youtu.be/k7YITNpvcaY?t=5640
It's conceivable that microcontrollers with memory management units could become more commonplace. Your memory-trashing program could be restarted while the device keeps on doing other things, rather than the traditional bare-metal approach of having a watchdog restart the device.
> The problem is that on microcontrollers everything is compiled, linked, and deployed together using really old-fashioned tools.
Not necessarily. There are modern and pleasant workflows using Rust.
This article is somewhat general. I.e., what is his intended use case? What is he building? Part of the beauty of embedded programming is that you can avoid the complications of abstractions like virtual machines and operating systems, and the performance penalties that accompany them.
Also, there are a variety of interpreted language environments for microcontrollers, including at least one in Python and one in Lisp. Saying that microcontrollers must use statically compiled code is objectively incorrect.
Now most projects will use C due to space issues, but that’s a different argument.
Most projects use C because there was zero consideration for anything else. No embedded team sits down at the beginning of the project and says "should we use lisp?". It's just C. Or maybe "C, but use a C++ compiler with less UB".
Indeed, if one moves into the maker community, there have been Basic, Pascal, Oberon, and Java compilers for ages.
Those companies have managed to stay in business selling such compilers for at least three decades now, which kind of proves there is a market of people willing to pay for them instead of going with C like everyone else.
CircuitPython (derived from MicroPython) is objectively pretty excellent, and running on an increasing number of platforms. It's been a really great platform for education.
Kasper Lund is very smart and amazingly productive, so this sales pitch is disappointing. Amazon's work on formal verification of portions of FreeRTOS, for example, is sophisticated and impressive. What Lund misses in his characterization of FreeRTOS as "primitive" is how much refinement goes into simplicity in embedded work.
Here's an example: FreeRTOS' over-the-air update (OTA) mechanism has been formally verified. Why would you formally verify your OTA update mechanism? Because getting OTA updates right is one of those simple things that turns out to be hard. Hard problems aren't solved by just adding another layer of abstraction; my expectation is that it will be more difficult to establish a Toit system is performing correctly than a conventional bare-metal or RTOS-based embedded system.
Hmmm. If I choose a chip with the resources to run this Toit thing, I can reduce costs by switching to a cheaper, less powerful chip and writing native code. If I'm shipping several million of these, it's a no-brainer; it saves real money. In some products I've worked on, saving ten cents per board would be a big deal. Also note that you're paying more than money; fancier and more capable chips usually consume more power, so imagine some lively conversations with the industrial design folks about battery lifetime and size, too.
If you're using off-the-shelf consumer-y ESP things, you're likely insensitive to cost and you're probably paying for performance you don't need.
I last saw this technique (maybe 15 years ago) used by a super tiny .NET runtime environment intended for devices like smart watches; you wrote to "tiny" APIs, a tool did a bunch of reprocessing of your binary into smaller tables and whatnot, and poof, it would run on a watch. With all the GC and so forth going on, it's not something I'd want to write a whole embedded system in, though it was okay for applets.
I'm not oblivious to the security advantages of using a bytecode interpreter, but on a high-volume product you'd have to make a case for how important this kind of thing is.
ESP32 is extremely price competitive in its space. The use case is internet connected applications, which are going to need the kinds of resources Toit requires anyway.
What gets me is the implicit promotion of javascript as the gold standard of available tooling. That gives me hives, even though it's probably true.
Seems to be heavily influenced by the Smalltalk language and VM. I find it interesting that people will say that embedded microcontrollers are too resource constrained for this type of system, despite many being significantly more powerful than the original Xerox workstations that ran Smalltalk. And those machines were considered mini-computer-class machines at the time.
People who write these bait-and-switch marketing articles really need to learn that once you do it, you lose potential readers for a long time. There are so many interesting things to read every day - I'm not going to waste any of my precious attention on his writing again after that.
I got interested by the idea of a virtual machine for microcontrollers, but I'm less excited about having to use a new dedicated programming language, and the ESP32 being the only supported hardware.
It seems like a neat technology but it would be useful to have some concrete examples of what you can do.
The company sales pitch seems to be about fleet management and over-the-air updates. This doesn’t seem very relevant to me considering that I’m a hobbyist flashing a single microcontroller over USB, and I never run more than one program at a time. But perhaps I’m just not the target market and I’m not imagining the right use cases?
Over the past 8 months I’ve been working with Rust on STM32s, and it’s pretty impressive, to say the least.
For me, a solid, well documented HAL that works well is key to efficiency in embedded development.
Rust's language features make abstractions over things like SPI or I2C peripherals, which were once impossible in C, totally doable, and the community is growing.
I can write a Rust library for an OLED device that is driven over a platform-agnostic I2C interface, and it will run on any microcontroller that implements the necessary abstractions.
All sounds amazing. On esp32 though I'd settle for simply a way to emulate the TFT display. It gets old re-flashing the device to move a pixel one to the right or to add a println for debugging.
Kasper Lund has a lot of my respect for the work he did on V8 and Dart, so I was hoping this was going to be a super interesting article about struggling to... make V8 run on a specific microcontroller!
I think I feel more cheated because of his background and that it's highly featured on HNews.
Yeah, as someone who works with microcontrollers professionally and would always love better tools to work with, this article didn't do much to sell me.
It is a nice bit of marketing in the way it positioned itself. I don't think I'd be able to articulate why I'm not rushing to use this without getting called an old fogey.
He does offer a few indirect references as to why running V8 on a microcontroller would suck. Like the section on polymorphic inline cache for method resolution.
Running V8 implies JIT and there is no JIT on most microcontrollers because they're based on Harvard architecture where no code can be written in data memory.
A large number of microcontrollers are ARM based, which isn't Harvard architecture. I presume the thinking is that there's some massive number of Harvard architecture chips in mundane things that lets us say "most" are Harvard arch?
I feel like the title is a bit misleading - the article only mentions V8 once, and is actually about a (still interesting) new VM solution for ESP32s (and potentially others) that provides a more ergonomic development environment and simplifies things like having multiple programs running on the same MCU, OTA software update/install, and pseudo-isolating memory between programs. Easy/low-risk OTA updates in particular seems like it could be pretty killer to me - I know there are ways of doing it today, but the caveats are significant.
From the title I would have expected this to be an article on V8, however there is barely any mention of it besides the title and the author saying he worked on it in the past. Instead, it seems to be an advertisement for a new language / VM, which makes it feel like a bait and switch. (even through the content itself is interesting)
I see how the title might mislead one into thinking that the author was about to outline his problems fitting V8 into a uC, but I don't understand the upset that he was successful.
If successful, the streaming service with lack of relocation should be a useful tool in the toolkit for working with microcontrollers. I don't see much use for it for me at the moment, but then I haven't used uLisp much either.
It's hard to substantively comment on this when the article is so tl;dr and written in wall-of-text style. What exactly is this proposed tech doing that can't be done by running a simple WASM interpreter on the bare metal? (JIT itself is generally off-limits on microcontrollers due to Harvard architecture, so interpreted code is the best case.)