That's reasonable. There's a lack of good, free OSes for the space above Arduino and below Linux on Raspberry Pi. That's the land of VxWorks and QNX, which are good, but not free. So people end up running an entire Linux environment, with all its vulnerabilities, on something that doesn't need it.
Yocto Linux can be stripped down to the bare minimum for embedded applications, reducing your attack surface by a lot. The advantage of Linux on embedded is that your new developers are up and running immediately, something that's not possible with most other OSes/RTOSes. Additionally, lots of packages for diagnostic utilities like iperf are available even for obscure processors that run Linux. As long as battery life isn't a concern, let embedded Linux be overkill; the advantages are just too great.
FreeRTOS isn’t an OS. It’s a nice scheduler. Which is what the vast majority of people want and need when they think embedded.
Zephyr is an OS. And IMO it will be eclipsed by embedded variants of Linux quickly as chips come up in performance. There is just no path for co-existence that I see.
Zephyr and Linux are not competing, generally. There are literally a billion devices shipped annually in Zephyr's niche: microcontrollers with under 1 MB of RAM. This will continue for the next 10 years, because for a significant part of the market, cost reductions will go into power and radio improvements rather than into spending money on RAM and flash to run Linux.
Linux can do real-time scheduling. Most of the patches for that have been merged into mainline at this point, but they've existed out of tree (with real use) for a long time.
It looks like Linux is doing higher-level stuff, and the real-time is done by a pair of redundant MCUs and a FPGA.[0]
It's running F´ (F Prime) flight control software,[1] which can run under Linux, as well as on MCUs with no OS. I'm not sure on which processors it's running.
"The two redundant TI Hercules safety processors serve as the low-level flight controller (FC); each has dual-core lockstep ARM Cortex-R5F and ECC-protected Flash and RAM. The two processors run in sync and are provided with the same clock and data by the FPGA, which handles all the sensor and actuator interfaces. The lockstep mechanism does cycle-by-cycle error detection. If a fault is detected, it signals the error to the FPGA; the FPGA switches to the other processor and power cycles the faulty one, so the flight control software continues to run without disruption.

...

At the heart of the helicopter avionics is a Field-Programmable Gate Array (FPGA). The FPGA implements the custom digital functions not implemented in software due to resource limitations of the processors (e.g. I/O or bandwidth limits), timing requirements, power considerations, or fault tolerance considerations. The FPGA device is a military-grade version of MicroSemi's ProASIC3L, which uses the same silicon as the radiation-tolerant device from the same family. The FPGA performs all critical I/O to the sensors and actuators, and fault management functions including detecting error flags from the MCU and hot-swapping to the functioning MCU in case of an error. The FPGA performs vehicle flight control including an attitude control loop operating at 500 Hz, an outer motor control loop, waypoint guidance, sensor I/O from the IMU, altimeter and inclinometer, and analog telemetry for current and temperature sensing. It is responsible for system time management and interfaces to the IMU, altimeter and inclinometer sensors. It implements the "inner" motor control loop used for the two brushless rotor motors and the six brushed motor servos (three at each rotor swashplate), as well as power management and thermal control functions."[0]
Ingenuity had a relatively shoe-string budget and leaned heavily on off-the-shelf components. It was also a technology demonstration rather than anything mission-critical, so it didn't need absolute reliability.
Which isn't to say that NASA would never use Linux in mission-critical applications; there exist real-time patches for Linux.
Wouldn't something like Tiny Core Linux be the best compromise? It's the Linux kernel plus the bare essentials. You could probably harden and modify it for this scenario.
The term "microcontroller" covers about a gazillion different chips, most of which this doesn't address. (The one which makes your Christmas tree lights flash?) It needs to be qualified.
One of the Tock maintainers here, happy to answer any questions.
For some context, Tock has been around for a number of years now (we first built it in 2015). Beyond its original intended purpose (navel-gazing academic research), it runs the Chromebook embedded controller, has other similar applications in data-center root-of-trust, Microsoft's Pluton (rumor has it, for some future version anyway), and others.
I was unimpressed after reading Tock's home page (and I think I read it all), but then I was quite impressed by your "runs the Chromebook embedded controller" (because the ChromeOS team seems to me to be great at security).
I humbly suggest adding that to the home page (being careful to mention that the ChromiumOS EC repo has not been updated to reflect that fact and to mention that before about a year ago, something else ran the EC).
Nope. It used to be a home baked solution that was totally unrelated to (and pre-dates) Zephyr. I believe that is the one that is still in the ChromiumOS EC repo, but for the last year or so, it's a Tock-based system.
Oh my apologies! I hadn't known about that intermediate version.
I don't know of a public announcement. Tock's license is acknowledged in Chromebook's licenses and those involved in Tock know simply because that team talks to us (a few of us interned on that team back when we were PhD students to help the effort at various stages as well).
It's not a secret, but it's also not something that seems to be high on anyone's todo list over there to announce.
Correction, it's not the EC that runs Tock, but rather the GSC (the creatively named Google Security Chip). It used to run a system called Cr50 while recent versions run Ti50, which is Tock-based.
In principle many things, but you're mostly restricted by what you can meaningfully physically build with off-the-shelf parts. Prototyping with a bunch of wires on a breadboard is one thing; sticking that on your wall is another. And most off-the-shelf dev kits have very few useful sensors/actuators on board.
The canonical thing is an environment sensor (so a temperature et al. sensor). Unfortunately, a larger HVAC system will often integrate with sensors through some proprietary protocol (even Thread-based systems like Nest don't necessarily work with just any Thread-based sensor, though you could of course hack something together with Home Assistant).
One of the maintainers has been using it on the side for a plant-monitoring system.
All of these require some sensors beyond just the dev kit, which is typically pretty bare-bones.
You can build a two factor auth device with the nrf52840 dongle (OpenSK is built with Tock), which requires basically no extra peripherals.
I've used the nrf52840dk to control a garage door, which similarly only needs a couple of wires.
Note bcantrill's comment below for corrections of my misstatements here.
Oxide was involved with Tock before building Hubris. They are similar in some ways, with somewhat different "visions" for the end use case. Hubris compiles all "applications" into the "kernel", and relies exclusively on the type system for isolation---in this sense it is a much more traditional RTOS, just written in Rust (and well designed!). Whereas Tock targets applications that are not necessarily in Rust, and that may be dynamically loaded/replaced/removed separately from the kernel---in this sense it is much more similar to a traditional desktop/server OS, but designed for significantly lower-resourced settings. This also makes Tock more robust to applications that are actually untrusted or unreliable.
There is also of course a difference in development and evolution. Hubris is developed by Oxide and released open source, while Tock is more community-based and, thus, more open to supporting a variety of use cases. This is both bad and good. If your use case is exactly Oxide's use case, it's likely Hubris is better than Tock (there is just no design-decision baggage from other use cases). If your use case is a bit different, Tock starts to be more appropriate.
A side note that I think the Hubris-Tock relationship is a real positive case for open source development. Oxide was very transparent and forthcoming when they decided to switch away and offered a lot of useful feedback. I hope they took some ideas from Tock and I think we took some ideas from following Hubris.
That is not correct. Hubris -- very importantly -- uses the MPU to isolate applications from one another and from the kernel: if any application accesses memory that it is not permitted to access (either in I/O space that has not been assigned to the application or in another application), it will fault and (by default) be restarted. Moreover, we make sure that the stack for a given application grows towards a protection boundary (rather than towards its own data), assuring that a stack overflow (our most common fault, by far!) does not result in an application corrupting its own data but rather in that application dying.
It is definitely true that Hubris does not have (and never will have) a dynamic loading facility: dynamic loading is very important to Tock, but we saw that it was taking us not just away from our use case but directly contrary to it. In contrast, Hubris has exclusively static task assignment -- which has proved to be a very important constraint for overall system robustness, as it allows things like task restart to happen without fear of unavailability of resources. Cliff Biffle expands on more details of Hubris in his OSFC 2021 talk[0].
I also don't think it's accurate to speak of an "exact use case" for Hubris, as we ourselves use it in disparate applications: among other things, it runs our root-of-trust, our service processor, our power shelf controller, and on our manufacturing line to program parts. What these use cases have in common is that they are embedded microcontrollers in which robustness is essential. This is not to say that Hubris is a fit for all embedded use cases, of course -- but the fit is certainly more broad than how we happen to be using it.
In terms of other contrasts to other embedded systems, we have spent quite a bit of time on debugging infrastructure, with our debugger being co-designed with the operating system; more details on this in Matt Keeter's OSFC 2023 talk.[1]
Good question, I don't know! I would guess it's a home baked solution, but I believe Pluton, the hardware, is changing as well (I am not an authority on this, I only know a bit from talking with the developers at Microsoft who work on the next gen stuff).
Just to illustrate how hard it is to name things without accidentally colliding with some meaning in some language: "tockos" in Hungarian means a light smack, specifically one aimed at the back of the head. Luckily that is something I quite often want to give to computers (micro or otherwise). So all is well with the balance of the universe.
There are certain lines that include security features designed for IoT, and they're becoming more popular: Arm Cortex-M33 and the like, so STM32 L5, U5, nRF53, etc. Lots of options if you want this, but most MCUs don't have them.
There's also an MPU in even simpler/cheaper MCUs. For instance, the ARM Cortex-M0+ sports an MPU, and this architecture is used in the STM32C0 ($0.24 in bulk) and RP2040.
I have no idea how the landscape looks in general, though.
The vast majority of modern MCUs have enough memory protection for Tock. Anything Cortex-M0+ or "better" has an MPU; RISC-V's PMP or ePMP works as well. Most 16-bit "legacy" (though still popular) MCUs don't.
Virtually anything with a radio these days (the MSPs were holdouts, but mostly those are Cortex-M these days as well).
> Security critical devices, like TPMs and USB authentication fobs, are actually multiprogramming environments running applications written by different people.
Eh, that doesn't sound like the design of a security-critical device to me.
Many a "secure execution environment" has failed to deliver its security guarantees after some third-party clown was given access to run their DRM module or whatever, and introduced vulnerabilities while doing so.
Devices that are serious about security tell the third party clowns to get their own chip. Although they may use a secure operating system as an extra layer of security, if it promises that.
Is this for when you are writing firmware that has non-cooperative processes (e.g. installable applications)? It sounds like it from the description. Of note, in C and C++ it seems common to use an RTOS for any project. In Rust, you can get away with a lot (including complex programs with lots of IO) without one, e.g. using interrupts, DMA, timers, etc. to build an event loop and manage asynchronous events.
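The RTOS-free event-loop pattern described above can be sketched on the host. This is a simplified model, not real firmware: the event names are illustrative, and on actual hardware the queue would be fed from interrupt handlers (e.g. via a lock-free SPSC ring buffer from the `heapless` crate) with the main loop sleeping on WFI between passes.

```rust
use std::collections::VecDeque;

// Illustrative event types an ISR might post.
#[derive(Debug, PartialEq)]
enum Event {
    TimerTick,
    UartByte(u8),
}

struct EventLoop {
    // Stands in for an interrupt-safe queue on real hardware.
    queue: VecDeque<Event>,
    ticks: u32,
    received: Vec<u8>,
}

impl EventLoop {
    fn new() -> Self {
        EventLoop { queue: VecDeque::new(), ticks: 0, received: Vec::new() }
    }

    // Stand-in for an interrupt handler posting an event.
    fn post(&mut self, ev: Event) {
        self.queue.push_back(ev);
    }

    // One pass of the main loop: drain pending events and dispatch.
    // On hardware you would then execute WFI until the next interrupt.
    fn run_once(&mut self) {
        while let Some(ev) = self.queue.pop_front() {
            match ev {
                Event::TimerTick => self.ticks += 1,
                Event::UartByte(b) => self.received.push(b),
            }
        }
    }
}

fn main() {
    let mut el = EventLoop::new();
    el.post(Event::TimerTick);
    el.post(Event::UartByte(b'A'));
    el.post(Event::TimerTick);
    el.run_once();
    assert_eq!(el.ticks, 2);
    assert_eq!(el.received, vec![b'A']);
}
```

The point is just that a match over an event enum, plus interrupt-driven producers, covers a lot of ground that would otherwise call for RTOS tasks.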
Right, that's my question. You can use flash and display and whatnot without an OS. When are you using a microcontroller that you'd need the overhead of an OS where an SoC wouldn't be appropriate?
Again, these are not binary. I may decide I need a more feature-full (more "OS-like") task switcher for what I'm trying to do, without going to a full OS (by some definition). I may decide that I need some aspects of an SoC in my microcontroller. (You can get pretty much any set of features with any core; it's more the number of desktop-like features that makes it an SoC.)
To your specific question: I'm not sure it goes that direction, because a full OS depends on the presence of certain features on the chip. I can think of some examples of it going the other way, though - of an SoC where you don't need or want a full OS.
It's the application that needs an OS, not the computer. It is possible (and isn't unheard of, though decreasingly common) to run software on a more featureful CPU with virtual memory and loads of RAM next door, without an OS, for example.
And it of course depends what one means by "an OS." But, generally, if you are running multiple tasks that might depend on shared resources, you might want an OS---after all, an OS is just something that mediates shared resources among different applications.
You might prefer to use a microcontroller because of power constraints, security (generally easier to mitigate physical attacks and side channels in simpler hardware), or cost and you don't need more resources.
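The "mediating shared resources" idea can be shown in miniature. This toy host-side sketch (not an OS, of course) has several threads playing the role of tasks and a mutex playing the role the kernel plays for a real shared peripheral: without the mediation, the updates would race.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The "shared resource": think of a peripheral register or a bus.
    let counter = Arc::new(Mutex::new(0u32));
    let mut handles = Vec::new();

    // Four "tasks" contend for it; the mutex serializes access.
    for _ in 0..4 {
        let c = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..1000 {
                *c.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    // All 4000 increments survive because access was mediated.
    assert_eq!(*counter.lock().unwrap(), 4000);
}
```

An OS generalizes exactly this: arbitration of CPU time, memory, and peripherals among tasks that don't coordinate with each other.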
If you're even talking about virtual RAM, you're already in SoC territory. And the concept of an "application" on a microcontroller is foreign to me. I still don't get it.
Shameless plug for anyone interested in these kinds of systems who is near, or able to travel to, San Diego: TockWorld is happening at the end of this month and has general-audience-oriented talks and tutorials for the first time this year! Please join us! https://world.tockos.org
For that Nordic nRF52840 dongle... the ones I bought early on had locked bootloaders and only allowed apps on top of that, AFAIR, and needed at least some HW debugger connection to remove it.
Is this still necessary, is it included with Tock, or is it not needed at all?
How is this an appropriate answer to a (not totally baseless) complaint about bad developer ergonomics? GP didn't say how hard they tried learning Rust. This comes off as needlessly condescending.
More than the syntax, it's the verbosity. The more you have to read, the more you have to process. Did any of the creators of Rust have a look at the syntax of Crystal, Kotlin, Julia, or Elixir? They're simple; you really don't have to put a lot of effort into reading them.
Rust looks like C that took the verbosity of Java as a mandatory requirement and borrowed some cool concepts from Haskell (but not the simplicity of its syntax, though).
Interesting, I find Rust is pretty dense, but mostly in a good way. It packs a lot of important information into the code, which is useful for reasoning about it. Languages with simpler syntax tend to hide that information (sometimes because it's irrelevant due to the kind of tradeoffs they make; all the languages you mention have some form of GC, for instance). Haskell I would say is especially hard for me to read: when there's so little syntax, the code is just a sequence of functions and symbols without obvious delimiters, and it becomes a lot harder for me to understand its structure.
Part of it is the explicitness mandated by the language (i.e. when dealing with Options or Results, you have to handle both cases or explicitly state which one you expect as an invariant). There are also some curious (IMO) choices on the part of rustfmt that seem to favor density of code that way.
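To make that explicitness concrete, here is a minimal sketch (the `parse_port` helper is just an illustration): with a `Result` you either handle both arms with a `match`, or declare the invariant with `expect`, which panics loudly if it's violated. The compiler won't let you silently ignore the error case.

```rust
// Hypothetical helper: parse a port number, which can fail.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // Option 1: handle both cases explicitly.
    match parse_port("8080") {
        Ok(p) => println!("port {}", p),
        Err(e) => println!("bad port: {}", e),
    }

    // Option 2: state the invariant; a bad value panics with this message.
    let p = parse_port("8080").expect("port literal is valid");
    assert_eq!(p, 8080);

    // Out-of-range or non-numeric input is an Err, never a silent truncation.
    assert!(parse_port("70000").is_err()); // doesn't fit in u16
    assert!(parse_port("abc").is_err());
}
```

This is the verbosity being complained about and the information density being praised: both arms of every fallible operation are visible in the source.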
I assert that certain syntaxes work with how certain people think, and don't work with how certain other people think. You can train to get around that to some degree, but it's not entirely a skill issue.
Again, without stating something more substantive about why the syntax is "incomprehensible" to them, my best assessment is always going to be "skill issue".
Wow, this is just doubling down on the condescension, while baselessly insinuating laziness. I resent that, especially when it is about a language with parts that were intentionally designed to yield hard-to-read code.
Well, if I want to write an OS in Fortran, BASIC, Node.js, or even assembly, I think it is my choice. If you don't like it, just ignore it; I really don't see the beginning of a problem there.
I understand complaints in other contexts: for instance, if one sends tens of thousands of satellites or burns forests, the mere fact that they are doing it may bother me. But again: what do you care if somebody writes a software project in a language you don't like?
Hmm... unless you are one of the creators of the project, of course. Now if you don't like Rust, you should not become a maintainer of a Rust project; it's as simple as that. No need to go complain on every Rust project you find online.
Same for your own projects: don't become dependent upon a project that you don't want to have to deal with. If you do, then don't go complaining about it: you made the choice to depend upon it.
As someone who does some embedded development and also has a deeper general software background than most full time embedded developers: having a few large players in the MCU and/or industrial automation sector support a from-scratch effort to build a decent embedded OS in Rust, with ditto tooling, would be a game changer.
Most current embedded and RTOS operating systems and development toolchains are just horrific. Hence, most of the things around us that control important things depend on a much weaker foundation than you'd like to imagine.
This is mostly a manifestation of MCU manufacturers having trouble just managing their own challenges, without the capacity to even start thinking about what it is like for customers to use the devtools.
They love Zephyr because it helps them solve their problems. That it is a pain in the ass for developers isn't something they seem to spend much time thinking about. Or even seem to be aware of the importance of.
You can think of Rust as a containment zone to isolate the kind of people who would be Rust programmers from the people working on actual real things, obviously in C and C++.