I've seen something related, which returned a bool instead of failing compilation, used to switch between a path the optimiser could inline and some hand-written assembly. You could probably use this to make sure it was always inlined.
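If the "something related" was GCC's __builtin_constant_p (an assumption on my part - it fits the "returns a bool instead of failing compilation" description), the pattern looks roughly like the sketch below, assuming an ARM target and GCC. The bit-setting operation and the asm are purely illustrative:

    #include <stdint.h>

    /* Illustrative sketch: use a compile-time boolean (GCC's
     * __builtin_constant_p) to pick between a path the optimiser can
     * fold/inline and a hand-written assembly fallback. */
    static inline uint32_t set_bit(uint32_t word, unsigned bit)
    {
        if (__builtin_constant_p(bit)) {
            /* Known at compile time: plain C, folds to a single OR. */
            return word | (1u << bit);
        } else {
            /* Not constant: hand-picked instruction (ARM, illustrative). */
            uint32_t out;
            __asm__("orr %0, %1, %2"
                    : "=r"(out)
                    : "r"(word), "r"(1u << bit));
            return out;
        }
    }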
It depends on what orbit you want. Due to the latitude you'll end up pretty inclined by default - which is bad for equatorial orbits, sure, but a good start for polar ones. At a public session I went to several years back, they spoke of specifically trying to attract polar launches for that reason.
Not the OP, and not sure why it's particularly suited for polar, but it's a good location for these reasons:
1. Good atmospheric conditions.
2. Low air traffic.
3. Low/flexible/favourable regulations.
4. Good locations for ground infrastructure (in part because of 1. and also because everywhere is by the sea).
5. As I mentioned earlier in the thread, tertiary education geared to support the industry.
Also - and I don't know for sure this is a factor - we have a number of specialised industries, such as building large things from carbon composite (yachts) and radio communications, for example.
I think reason 3 is a big one. The govt. attitude is usually "give it a go", rather than a default "no".
Thanks. How big a factor is it that you aren't circling Earth's axis as quickly as most other places, and thus launches lose some boost? That's what I was wondering originally.
That being said, you can find quite a lot of uses in the modules written in Python - though I'm not going to pretend I can tell you the reasoning for all of them.
> For regular programmers, if your machine won't boot up, you are having a bad day. For embedded developers, that's just a typical Tuesday, and your only debugging option may be staring at the code and thinking hard.
Of course where it becomes even more fun is when it's a customer's unit in Peru and you can't replicate it locally :). But oh how I love it. I have definitely spent many a day staring at code piecing things together with what limited info we have.
But to get back on topic, I can definitely concur on the quality of most embedded compilers. It's a great day when I can just use normal old gcc. I've never run into anything explicitly wrong, but I see so many bits of weird codegen or missed optimisations that I keep the disassembly view open permanently, as a sanity check. The assembly never lies to you - until you find a silicon bug, at least.
I have worked on a device with this exact same "send a tiny sensor reading every 30 minutes" use case, and this has not been my experience at all. We can run an STM32 and a few sensors at single digit microamps; add an LCD display and a few other niceties and it's one or two dozen. Simply turning on a modem takes hundreds of microamps, if not milliamps. In my experience it's always been better for power consumption to completely shut down the modem and start from scratch each time [1] - which means you're paying to start a new session every time anyway. Now I'll agree it's still inefficient to start up a full TLS session, and a protocol like the one in the post will have its uses, but I wouldn't blame it on NAT.
[1] Doing this of course kills any chance at server-to-device comms; you can only ever apply changes when the device next checks in. This does cause us complaints from time to time, especially for devices with longer intervals.
Power Saving Mode (PSM), a power-saving mechanism in LTE, was specifically designed to address such issues. It allows the device to inform the eNB (base station) that it will be offline for a certain period while ensuring it periodically wakes up to perform a Tracking Area Update (TAU), preventing the loss of registration. This concept is similar to Session Tickets or Session IDs in (D)TLS—or at least, that’s how I like to think about it. However, there are no guarantees that the operator will support this feature or that they will support the report-in period that you want!
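For reference, on many LTE-M/NB-IoT modems PSM is requested with the 3GPP AT+CPSMS command (TS 27.007). A minimal sketch, where modem_send_at() is a hypothetical UART helper and the timer values are just examples - the network is free to grant different timers, or none at all:

    /* Request PSM with a periodic TAU of 10 hours and an active time of
     * 10 seconds. The bit strings use the GPRS Timer 3 / Timer 2 encodings
     * from 3GPP TS 24.008: the top three bits select the unit, the lower
     * five the multiplier. Values here are examples only. */
    int modem_send_at(const char *cmd);   /* hypothetical helper: sends the
                                             command, returns 0 on "OK" */

    int request_psm(void)
    {
        /* "01000001" -> unit 10 hours, value 1  -> TAU of 10 hours
         * "00000101" -> unit 2 seconds, value 5 -> active time of 10 s */
        return modem_send_at("AT+CPSMS=1,,,\"01000001\",\"00000101\"");
    }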
Maintaining an active session for communication between the endpoint and the edge device is highly power-intensive. Even with (e)DRX, the average power consumption remains significantly higher than in sleep mode. Moreover, the vast majority of devices do not need to frequently ping a management server, as configuration and firmware updates are typically rare in most IoT deployments.
Great pointer! My sibling post in this thread references a few other blog entries where we have detailed using eDRX and similar low power modes alongside Connection IDs. I agree that many devices don't need to be immediately responsive to cloud to device communication, and checking in for firmware updates on the order of days is acceptable in many cases.
One way to get around this in cases where devices need to be fairly responsive to cloud to device communication (on the order of minutes) but in practice infrequently receive updates is to use something like eDRX with long sleep periods alongside SMS. The cloud service will not be able to talk to the device directly after the NAT entry is evicted (typically a few minutes), but it can use SMS to notify the device that the server has new information for it. On the next eDRX check-in, the SMS message will be present; the device can then ping the server and, if using Connection IDs, pull down the new data without having to establish a new session.
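Roughly, the device-side loop for that pattern could look like the sketch below - every helper here is hypothetical, and the eDRX negotiation, SMS parsing, and Connection ID session handling are all elided:

    #include <stdbool.h>

    /* Hypothetical helpers - names and signatures are illustrative only. */
    bool modem_has_pending_sms(void);        /* did an SMS arrive while we slept? */
    void modem_clear_sms(void);
    void app_sync_with_server(void);         /* DTLS exchange, resumed via Connection ID */
    void modem_sleep_until_next_edrx(void);  /* long eDRX cycle, radio mostly idle */

    /* "Long eDRX + SMS wake-up": the server can't reach the device once the
     * NAT entry is gone, so it sends an SMS instead; the device notices it
     * on the next eDRX wake and pulls the new data itself. */
    void device_loop(void)
    {
        for (;;) {
            if (modem_has_pending_sms()) {
                modem_clear_sms();
                /* Device-initiated traffic recreates the NAT entry; with
                 * Connection IDs no new DTLS handshake is needed. */
                app_sync_with_server();
            }
            modem_sleep_until_next_edrx();
        }
    }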
Is "Non-IP Data Delivery" (basically SMS but for raw data packets, bound to a pre-defined application server) already a thing in practice?
In theory, you get all the power saving that the cellular network stack has to offer without having to maintain a connection. While at the protocol layer NIDD is handled almost like an SMS (paging, connectionless), it is not routed through a telephony core (which is what makes SMS so sloooow). The base station / core will directly forward it to your predefined application server.
It has been heavily advertised, but its support is inconsistent. If you are deploying devices across multiple regions, you likely want them to function the same way everywhere.
802.11 supports the same thing. A STA (client) can tell an AP that it'll be going away for some time, and the AP will queue all traffic for the STA until it actively reports back. Broadcast traffic can also be synchronized to particular intervals (but low power devices are usually not interested in that anyway for efficiency reasons).
I have very little experience with Wi-Fi, as the industry I worked in relied almost exclusively on cellular networks. However, I wonder how many Wi-Fi routers actually support this functionality in practice - as queuing traffic means you need to cache it somewhere.
Author of this post here -- thanks for sharing your experience! One thing I'll agree with immediately is that if you can afford to power down hardware, that is almost always going to be your best option (see a previous post on this topic [0]). I believe the NAT post also calls this out, though I could have gone further to disambiguate "sleeping" and "turning off":
> This doesn’t solve the issue of cloud to device traffic being dropped after NAT timeout (check back for another post on that topic), but for many low power use cases, being able to sleep for an extended period of time is more important than being able to immediately push data to devices.
(edit: there was originally an unfortunate typo here where the paragraph read "less important" rather than "more important")
Depending on the device and the server, powering down the modem does not necessarily mean that a session has to be started from scratch when it is powered on again. In fact, this is one of the benefits of the DTLS Connection ID strategy. A cellular device, for example, could wake up the next time in a completely different location, connect to a new base station, be assigned a fresh IP address, and continue communication with the server without having to perform a full handshake.
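For anyone curious what that looks like in code, here is a minimal client-side sketch using Mbed TLS's DTLS Connection ID support - it assumes a build with MBEDTLS_SSL_DTLS_CONNECTION_ID enabled, and the rest of the DTLS setup (config defaults, RNG, credentials, mbedtls_ssl_setup) is omitted:

    #include "mbedtls/ssl.h"

    /* Sketch only: conf and ssl are wired up elsewhere in the usual
     * Mbed TLS flow; this just enables the Connection ID extension. */
    int enable_connection_id(mbedtls_ssl_config *conf, mbedtls_ssl_context *ssl)
    {
    #if defined(MBEDTLS_SSL_DTLS_CONNECTION_ID)
        int ret;

        /* Use zero-length CIDs for records sent to us - the typical
         * client-side setup, since it is the server that needs a CID to
         * recognise the client after its address changes. */
        ret = mbedtls_ssl_conf_cid(conf, 0, MBEDTLS_SSL_UNEXPECTED_CID_IGNORE);
        if (ret != 0)
            return ret;

        /* Offer the extension in the handshake with an empty own CID. */
        return mbedtls_ssl_set_cid(ssl, MBEDTLS_SSL_CID_ENABLED, NULL, 0);
    #else
        (void)conf;
        (void)ssl;
        return -1; /* library built without Connection ID support */
    #endif
    }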
In reality, there is a spectrum of low power options with modems. We have written about many of them, including a post [1] that followed this one and describes using extended discontinuous reception (eDRX) [2] with DTLS Connection IDs and analyzing power consumption.
You can usually get a pretty good starting point from just a single build, and only refine it once you find a build it breaks on. It's essentially just finding a unique substring. In my experience this almost always involves some wildcard sections, so the signature in the parent got lucky not to need them. I like to think of it as matching the shape of the original instructions rather than matching them verbatim.
To manually construct a signature, you basically just take what the existing instructions encode to, and wildcard out the bits which are likely to change between builds - things like absolute addresses, larger pointer offsets, the length of relative jumps, and sometimes even which registers the instructions operate on. Then you check whether it's still a unique match, and if not, add a few more instructions on. Here's an example of mine that needed all of those:
Now since making a signature is essentially just finding a unique substring, with a handful of extra rules for wildcards, you can also automate it. Here's a ghidra script (not my own) which I've found quite handy.
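If you want to play with the idea outside of Ghidra, the core of it is small. Here's a rough, illustrative sketch (not the script referenced above) that scans a byte buffer for a wildcarded pattern - a mask byte of 0xFF means "must match", 0x00 means "wildcard". For signature generation you'd keep scanning past the first hit to confirm it's unique, and grow the pattern if it isn't:

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative wildcarded signature scan (not the Ghidra script above).
     * pattern and mask have the same length; mask 0xFF = exact match,
     * mask 0x00 = wildcard byte. Returns the first match, or NULL. */
    static const uint8_t *find_signature(const uint8_t *buf, size_t buf_len,
                                         const uint8_t *pattern,
                                         const uint8_t *mask, size_t pat_len)
    {
        if (pat_len == 0 || pat_len > buf_len)
            return NULL;

        for (size_t i = 0; i + pat_len <= buf_len; i++) {
            size_t j;
            for (j = 0; j < pat_len; j++) {
                if ((buf[i + j] & mask[j]) != (pattern[j] & mask[j]))
                    break;
            }
            if (j == pat_len)
                return &buf[i];
        }
        return NULL;
    }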