The problem is that the competition in embedded is generally an utter pile of shit. Good luck even talking to a vendor if you're not willing to order 100k units or more and don't have inside connections; BSPs (bootloader, kernel, userland libraries) are generally years-out-of-date forks that barely build; and serious documentation is often behind NDAs...
RPi has a lot of room to sink before it gets threatened by the competition.
This one kinda makes sense: buy a multiplayer desktop app and add ChatGPT as a default player. I don't think they care too much about the "human" collaboration. APIs are much harder to sell than fancy apps.
But consumers do? (Almost) nobody will pay Apple-level prices for something that has less software support and, overall, a worse experience than on macOS. TBH, other than gaming, I'm not sure why anyone is paying 1k+ for a Windows laptop.
Come to any engineering school, and I mean real engineering like Mechanical, Electrical, Civil, and you will see almost no one uses a Mac. The software just isn't there. And even when the software is available, the exorbitant price of RAM makes it a bad deal for many students.
Because despite the hyperbolic praise of the hackers here, Apple devices are worthless for some professional workloads.
Before you ask: I'm one user who has a Razer Blade 16 (4090) and a Galaxy Book 4 Ultra that I use for CAD and BIM work. I'll pass on having a dedicated video editor.
I'm not sure you should expect much development for a microkernel that has been proven to meet its specification and to be free of bugs. Microkernels are supposed to have a very small API, so there's not much development needed there unless it doesn't meet the specification or has other kinds of bugs, which is obviously not the case here.
By its nature at this stage (after the full proving effort is complete), any changes to the main/master branch are inherently "stable", and PRs are only accepted once they reach that state.
Prob because they are super behind in the cloud space; it's not like they wouldn't like to sell the service. They've ignored photo privacy quite a few times in iCloud.
Was it a secret? You could have guessed that something advertised [0] for "AI" had some kind of SIMD. Even ChatGPT 3.5 can give relevant code to use "AI" features [1].
True: loads and stores mask off the bottom 4 bits of the address. They try to help the situation by including an instruction that can shift a pair of 128-bit registers by a byte count.
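For anyone wondering what that looks like in practice, here's a rough sketch of the idea in plain C. This is just a stand-in for the actual PIE instructions; the names and types here are mine, not Espressif's:

    #include <stdint.h>
    #include <string.h>

    /* Toy stand-in for a 128-bit vector register. */
    typedef struct { uint8_t b[16]; } v128;

    /* Emulate an unaligned 128-bit load on hardware whose vector
       load masks off the bottom 4 address bits: fetch the two aligned
       16-byte blocks that straddle the address, then shift the pair
       by the misalignment to extract the window you wanted. */
    static v128 load_unaligned_128(const uint8_t *p)
    {
        const uint8_t *base = (const uint8_t *)((uintptr_t)p & ~(uintptr_t)15);
        unsigned off = (uintptr_t)p & 15;   /* bytes of misalignment */

        v128 lo, hi, out;
        memcpy(&lo, base, 16);              /* first aligned block  */
        memcpy(&hi, base + 16, 16);         /* second aligned block
                                               (note: can read up to 16
                                               bytes past the buffer,
                                               same hazard as the real
                                               hardware trick) */

        /* The "shift a pair of registers by bytes" step: the result is
           the top (16 - off) bytes of lo followed by the bottom off
           bytes of hi. */
        for (unsigned i = 0; i < 16; i++)
            out.b[i] = (i + off < 16) ? lo.b[i + off] : hi.b[i + off - 16];
        return out;
    }

The hardware does the byte-level pair shift in a single instruction rather than a loop, but the load-two-aligned-blocks-then-shift structure is the same.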
And the author is not documenting them either, just announcing his new niche library. It is not like disassembling a few functions to prove that they exist is dark magic. I just don't see any value in the article.
Maybe I am missing something, but isn't it barely faster than the official ESP32_JPG? But fair enough, I didn't know that JPEG decoding on MCUs is a widespread thing.
The "official" version used in that blog post decodes the JPG all in one go - so it's pretty memory hungry. With JPEG encoders that decode sections of the image at a time you can minimise the amount of RAM that needs to be allocated. It's also possible to stream the display data out to screen using DMA while the next chunk of image data is being decoded.
It's very easy to forget what a range of MCUs there is, from very puny to very capable. For example, the Espressif range of MCUs, which you'll find in all sorts of consumer products, is very powerful. Couple that with a lot of cheap SPI-based display modules and you very quickly start wanting to show images.
You need to go back and read it again. I provide links to the relevant Espressif documents and in my next article I provide a simple example to get started. Would you rather have me copy the hundreds of pages of PDF into my blog post instead of providing a link?
I've also definitely seen it reference invented methods on APIs (that would have been very nice if they existed) that no past or future version implemented.