Most embedded systems are distributed systems these days; there's simply a cultural barrier that prevents most practitioners from fully grappling with that fact. A lot of the systems I've worked on have benefited from copying ideas invented by distributed-systems folks working on networking problems 20 years ago.
I worked on an IoT platform that consisted of three embedded CPUs and one Linux board. The kicker was that the Linux board could talk directly to only one of the chips, yet had to be capable of updating the software running on all of them.
That platform could be scaled out to up to six units of its kind in a master-slave configuration (the platform in physical position 1 would assume the master role, for a total of 18 embedded chips and 6 Linux boards), on top of optionally having one more box with one more CPU in it for managing some other stuff and integrating with each of our clients' hardware. Each client had a different integration, but at least they mostly integrated with us, not the other way around.
Yeah, it was MUCH more complex than your average cloud. Of course, the original designers didn't even bother to define a common network protocol for the messages, so each point of communication not only used a different binary format, it also used a different physical transport (CAN bus, Modbus, and Ethernet).
But at least you didn't need to know Kubernetes, just a bunch of custom stuff that wasn't well documented. Oh yeah, and don't forget the bootloaders for each embedded CPU; we had to update those bootloaders so many times...
The only saving grace was that a lot of the system could rely on literal physical security, because you need physical access (and a crane) to reach most of it. Pretty much only the Linux boards had to meet high security standards, and those weren't that complicated to lock down (besides maintaining a custom Yocto distribution, that is).
Many automotive systems have >100 processors scattered around the vehicle, maybe a dozen of which are "important". I'm amazed they ever work given the quality of the code running on them.
Indeed. I've been building systems that orchestrate batteries and power sources. It turns out it's a difficult problem to temporally align data points produced by separate components that don't share any sort of common clock source. Just take the latest power-supply current reading and subtract the latest battery current reading to get load current? Oops, they don't line up, and now you get bizarre values (like negative load power) whenever there's a fast load transient.
Even more fun when multiple devices share a single communication bus, so you're basically guaranteed to not get temporally-aligned readings from all of the devices.
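One common workaround for the misalignment above is to stop pairing each device's "latest" sample and instead keep short timestamped histories, then interpolate every series at one common instant. A minimal sketch (the function name and the sample data are made up for illustration, and this assumes the readings at least carry receive-side timestamps):

```python
from bisect import bisect_left

def interpolate_at(samples, t):
    """Linearly interpolate a sorted (timestamp, value) series at time t.

    Returns None if t falls outside the sampled range.
    """
    times = [s[0] for s in samples]
    i = bisect_left(times, t)
    if i == 0:
        return samples[0][1] if times[0] == t else None
    if i == len(samples):
        return None
    (t0, v0), (t1, v1) = samples[i - 1], samples[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Readings arrive over a shared bus at slightly different times.
supply_amps  = [(0.00, 10.0), (0.10, 10.2), (0.20, 14.8)]
battery_amps = [(0.05, -1.0), (0.15, -3.1), (0.25, -4.0)]

# Naive "latest minus latest" mixes samples from different instants;
# interpolating both series at one timestamp keeps them aligned.
t = 0.15
load = interpolate_at(supply_amps, t) - interpolate_at(battery_amps, t)
```

This obviously only papers over the problem (interpolation smears fast transients rather than resolving them), but it at least stops the sign-flipping nonsense values.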
I run a small SaaS side hustle where the core value proposition of the product - at least what got us our first customers, even if they didn't realize what was happening under the hood - is, essentially, an implementation of NTP running over HTTPS that can run on some odd devices and sync them to mobile phones via a front-end app and backend server. There's some other CMS stuff that makes it easy for our various customers to serve content to their customers' devices, but at the end of the day our core trade secret is just a roll-your-own NTP implementation... I love how NTP is just the tip of the iceberg when it comes to the wicked problem of aligning clocks. All of this is just to say: I feel your pain, but also not really, since it sounds like you're dealing with higher precision and greater challenges than I ever had to!
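For anyone curious, the heart of a roll-your-own NTP is just the classic four-timestamp offset/delay calculation; the transport (HTTPS request/response bodies, in a setup like the one above) is almost incidental. A minimal sketch, with timestamp names of my own choosing rather than anything from a real implementation:

```python
def ntp_offset(t0, server_recv, server_send, t1):
    """Classic NTP offset/delay calculation from four timestamps.

    t0:          client clock when the request was sent
    server_recv: server clock when the request arrived
    server_send: server clock when the response left
    t1:          client clock when the response arrived

    Assumes the network delay is roughly symmetric. Returns
    (offset, round_trip_delay), where server ≈ client + offset.
    """
    offset = ((server_recv - t0) + (server_send - t1)) / 2.0
    delay = (t1 - t0) - (server_send - server_recv)
    return offset, delay

# Example: client clock is 5 s behind the server, 0.1 s each way.
offset, delay = ntp_offset(100.0, 105.1, 105.2, 100.3)
```

In practice you'd take several samples and keep the one with the smallest round-trip delay, since that's the one where the symmetric-delay assumption is least wrong.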
Here’s a great podcast on the topic which you will surely like!
The ultimate frustration is when you have no real ability to fix the core problem. NTP (and its 'roided-up cousin PTP) are great, but they require a degree of control and influence over the end devices that I just don't have. No amount of pleading will get a battery vendor to implement NTP in their BMS firmware, and I don't have nearly enough stacks of cash to wave around to commission a custom firmware. So I'm pretty much stuck with the "black box cat herding" technique of interoperation.
Yeah, that makes sense. We're lucky in that we get to deploy our code to the devices. It's not really "embedded" in the sense most people use, as these are essentially sandboxed Linux devices that only run applications written in a device-specific programming language similar to Lua/Python (the scripts get compiled to bytecode at boot, IIRC), but they're nonetheless very powerful/fast.
You work on BMS stuff? That's cool - a little bit outside my domain (I do energy-modeling research for buildings), but I've been to some fun talks semi-recently about BMS/BAS/telemetry in buildings, etc. The whole landscape seems like a real mess there.
FYI, that podcast I linked has some interesting discussion of the trade-offs between PTP and NTP - worth listening to for sure.
Yes, even 'simple' devices these days will have peripherals (ADC/SPI etc.) running in parallel, often using DMA, multiple semi-independent clocks, possibly nested interrupts, etc. Oh, and the UART for some reason always, always has bugs, so hopefully you're using multiple levels of error checking.
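On the "multiple levels of error checking" point: a typical first layer on a flaky UART link is framing each message with a length byte plus a CRC, so corrupted bytes get dropped instead of parsed. A minimal sketch (the frame layout here is just an illustration, not any particular protocol; the CRC is the common CRC-16/CCITT-FALSE variant):

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bit-by-bit CRC-16/CCITT-FALSE (poly 0x1021), common on serial links."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def frame(payload: bytes) -> bytes:
    """Wrap a payload as [length][payload][crc16] for the wire."""
    crc = crc16_ccitt(payload)
    return bytes([len(payload)]) + payload + crc.to_bytes(2, "big")

def unframe(data: bytes):
    """Validate the length byte and CRC; return payload, or None if corrupted."""
    if len(data) < 3 or data[0] != len(data) - 3:
        return None
    payload = data[1:-2]
    rx_crc = int.from_bytes(data[-2:], "big")
    return payload if crc16_ccitt(payload) == rx_crc else None
```

On real hardware you'd layer this with the UART's own parity bit and an ACK/retry scheme above it, since a CRC only detects corruption, it doesn't recover from it.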
Yeah, it was a "fun" surprise to discover the errata sheet for the microcontroller I was working with, after beating my head against the wall trying to figure out why it wasn't doing what the reference manual said it should. It's especially "fun" when the erratum amounts to "The hardware flow control doesn't work. Like, at all. Just don't even try."