Hacker News | zorgmonkey's comments

There have been many vulnerabilities in TrustZone implementations, and both Google and Apple now use separate secure element chips. In Apple's case they put the secure element on their main SoC, but on devices whose main chip wasn't designed in house (like Intel Macs) they used the T2 Security Chip. On all Pixel devices I'm pretty sure the Titan has been a separate chip (at least since they started including one at all).

So yes, incorporating a separate secure element/TPM chip into a design is probably more secure, but ultimately the right call will always depend on your threat model.


Here's an excerpt about the anti-rollback feature from Nvidia's docs on how the Tegra X1 SoC in the Switch 1 boots [0] (called the Tegra210 in the document):

> By default, the boot ROM will only consider bootloader entries with a version field that matches the version field of the first entry, and will stop iterating through the entries if a mismatch is found. The intent is to ensure that if some subset of the bootloader entries are upgraded, and hence the version field of their entries is modified, then the boot ROM will only boot the most recent version of the bootloader. This prevents an accidental rollback to an earlier version of the bootloader in the face of boot memory read errors, corruption, or tampering. Observe that this relies on upgraded bootloader entries being placed contiguously at the start of the array.

[0] https://http.download.nvidia.com/tegra-public-appnotes/tegra...
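The selection loop the excerpt describes can be sketched roughly like this (a hedged illustration only; the struct fields and function names are invented, not Nvidia's actual types):

```rust
// Sketch of the boot ROM policy from the excerpt: only consider entries whose
// version matches the first entry's version, and stop at the first mismatch.
#[derive(Clone, Copy)]
struct BlEntry {
    version: u32,
    valid: bool, // stand-in for signature/read checks passing
}

// Returns the index of the first bootable entry under that policy.
fn pick_bootloader(entries: &[BlEntry]) -> Option<usize> {
    let expected = entries.first()?.version;
    for (i, e) in entries.iter().enumerate() {
        if e.version != expected {
            break; // stop iterating on the first version mismatch
        }
        if e.valid {
            return Some(i); // boot the first readable copy of the newest version
        }
    }
    None
}

fn main() {
    // Entries 0..1 were upgraded to v3 (contiguously, at the start of the
    // array, as the doc requires); entry 2 is a stale v2 copy.
    let entries = [
        BlEntry { version: 3, valid: false }, // corrupted copy of the new BL
        BlEntry { version: 3, valid: true },
        BlEntry { version: 2, valid: true }, // old version: never considered
    ];
    println!("selected entry: {:?}", pick_bootloader(&entries));
}
```

This is why a corrupted copy of the new bootloader falls back to another copy of the *same* version, but never to the older one.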


It isn't that wild; the typical name for it is anti-rollback, and you probably have at least one device that implements it. Most Android devices have anti-rollback efuses to prevent installing older versions of the bootchain/bootloader; they might still allow you to downgrade the OS (it depends on the vendor, if memory serves). Instead of using efuse counters, anti-rollback counters can also be stored in a Replay Protected Memory Block (RPMB), which many flash storage devices provide (eMMC often supports RPMB, but other storage types can as well). It is possible to implement anti-rollback on x86_64 by utilizing a TPM [0], but as far as I know, only Chrome OS does this.

[0]: https://www.chromium.org/developers/design-documents/tpm-usa...
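Whatever the backing store (efuses, RPMB, or a TPM counter), the core check is the same: compare the image's version against a monotonic minimum. A minimal sketch, with invented names:

```rust
// Hypothetical anti-rollback check: refuse to boot an image whose version
// is below the device's monotonic counter (efuse count / RPMB / TPM NV).
fn check_rollback(image_version: u32, min_version: u32) -> Result<(), &'static str> {
    if image_version < min_version {
        return Err("rollback detected: image older than anti-rollback counter");
    }
    Ok(())
}

fn main() {
    // An image at version 5 boots against a counter of 5...
    assert!(check_rollback(5, 5).is_ok());
    // ...but not once the counter has been bumped to 6.
    assert!(check_rollback(5, 6).is_err());
    println!("anti-rollback checks behave as expected");
}
```

The counter only ever moves forward (fuses can only be burned, RPMB writes are authenticated and monotonic), which is what makes the scheme tamper-resistant.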


It looks very likely that Chromium will be using the jxl-rs crate for this feature [0]. My personal suspicion is that they've just been waiting for it to be good enough to integrate, and they didn't want to promise anything until it was ready (hence the long silence).

[0] https://issues.chromium.org/issues/40168998#comment507


That was Mozilla's stance. Google was thoroughly hostile towards it. They closed the original issue citing a lack of interest among users, despite the users themselves loudly arguing otherwise. The only thing I'm not sure about is why they decided to reopen it. They may have decided that they didn't need this much bad PR, or someone inside may have been annoyed by it just as much as we are.

PS: I'm a bit too sleepy to search for the original discussion. Apologies for not linking it here.


> The only thing I'm not sure about is why they decided to reopen it.

It's almost certainly due to the PDF Association adding JPEG XL as a supported image format in the ISO standard for PDFs: Google's 180 on JPEG XL support came just a few days after the PDF Association's announcement.


That would make sense, since they would then need support for JXL for the embedded PDF viewer anyway. Unless they want it to choke on valid PDFs that include JXL images.


I see! Thanks for pointing out this very interesting correlation. So we got something better only because someone else equally influential forced their hand. Otherwise the users be damned, for all they care, it seems.


I have been relentlessly shilling JPEG-XL's technological superiority, especially against that joke of an alternative and stain on the Internet they call WebP:

https://www.reddit.com/r/DataHoarder/comments/1b30f8h/image_...

https://youtu.be/w7UDJUCMTng


Some of the same people developed both. Pretty sure Jyrki Alakuijala for example led the development of lossless mode for both WebP and JPEG-XL.


Thank you!

I designed WebP lossless alone. The rest of the WebP folks added a RIFF header and an artificial size limitation (16383x16383) to match with the size limitation of lossy WebP.

In JPEG XL I believe I had more influence on the lossy ("VarDCT") format than anyone else, but stayed relatively far from the lossless part (except the delta palette, two predictors, 2d lz-codes, and a few other smaller things). I believe Jon and Luca had most influence there.
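As an aside on the 16383x16383 limitation mentioned above: lossy WebP (VP8) stores frame width and height in 14-bit fields, so the cap falls straight out of the field width. A trivial check:

```rust
fn main() {
    // VP8 keeps frame dimensions in 14-bit fields, so the largest
    // representable dimension is 2^14 - 1.
    let max_dim = (1u32 << 14) - 1;
    assert_eq!(max_dim, 16383);
    println!("max lossy WebP dimension: {}x{}", max_dim, max_dim);
}
```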


It wasn't just the blatant lie about lack of interest; they also went out of their way to benchmark it and somehow present it as inferior to AVIF.


IIRC they benchmarked it as "not much better" than AVIF, not inferior.


That library had a hiatus of over 1.5 years with zero commits until recently, IIRC.

That this is working out is a combination of wishful thinking and getting lucky.


"Code frequency" for jxl-rs shows no activity from Aug 2021 to Aug 2024, then steady work with a couple of spurts. That's both a longer hiatus and a longer period of subsequent activity (a year+ ago isn't "recently" in my book.) What data have you based your observation on?


my fallible memory of roughly the same sources


Pebble watches run on Cortex-M microcontrollers with less than 1MB of flash storage and RAM. I like Kotlin Multiplatform, but getting it to run on them is extremely unlikely. I assume that for the foreseeable future Pebble apps will only be written in languages traditionally used for MCUs, like Rust and C/C++.


Calling Rust traditional is a bit of a stretch; while it is being done, it's pretty much bleeding edge (though if you don't use any of the manufacturer-supplied code and libraries to begin with, you should be fine).


It is honestly refreshing to see constraints like this again.

In my cloud infrastructure work (C++), we have gotten lazy. We bloat our containers because 'RAM is cheap'. Seeing a system designed to fit into 1MB reminds me that performance engineering used to be about efficiency, not just throwing more hardware at the problem.


I find this a little funny because, as a firmware engineer, the project I regularly work on only has 512kb of flash. This doesn't stop sales from making constant new feature requests.

Embedded is definitely a fun balance of what we could do and how much we can do.


I found a link to the PDF that seems to work https://data.ntsb.gov/carol-repgen/api/Aviation/ReportMain/G...

Also in case that link stops working I got it from this page https://www.ntsb.gov/investigations/Pages/DCA26MA024.aspx

EDIT: never mind, immediately after posting this comment it started giving a 403 error


Your first link is working fine


Yeah, it's working again for me too; they're probably having some sort of server problems


They still use Gerrit; that site is a code search UI they host, which is also a very nice way to navigate the code.


Rust moves are a memcpy where the source becomes effectively uninitialized after the move (that is to say, it is invalid to access it after the move). The copies are often optimized away by the compiler, but that isn't guaranteed.

This actually caused some issues with Rust in the kernel, because moving large structs could make you run out of the small amount of stack space available on kernel threads (they only get 8-16KB of stack, compared to a typical 8MB for a userspace thread). The pinned-init crate is how they ended up solving this [1].

[1] https://crates.io/crates/pinned-init
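The move-is-a-memcpy semantics can be seen in a small sketch (the struct and sizes here are just for illustration):

```rust
// A move of a large struct is semantically a bitwise copy of its bytes;
// the source is then statically dead, and the compiler rejects further use.
struct Big {
    buf: [u8; 4096], // large enough that an unoptimized move copies 4 KiB
}

fn consume(b: Big) -> usize {
    b.buf.len()
}

fn main() {
    let a = Big { buf: [0u8; 4096] };
    let n = consume(a); // `a` is moved (bitwise-copied) into `consume`
    // println!("{}", a.buf[0]); // error[E0382]: use of moved value: `a`
    println!("moved {} bytes", n); // prints "moved 4096 bytes"
}
```

In the kernel case the problem is the temporary on `main`'s (or a kernel thread's) stack: constructing `Big` locally and then moving it into a `Box` can still materialize the full 4 KiB on the stack first, which is what in-place initialization avoids.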


With enough effort you could definitely do it. Just remember it's a device that came out in 2006 with 256MB of system RAM and 256MB of VRAM; at best you're running a quite small model after a lot of work porting some inference code to the Cell processor. Honestly, it does sound like a cool excuse to write code for the Cell, but don't expect amazing performance or anything.


This is a very important point: careful use of a GC for a special subset of allocations (say, ones that have tricky lifetimes for some reason and aren't performance critical) could have a much smaller impact on overall application performance than people might otherwise expect.


Yeah, and it's even better if you have a GC where you can control when the collection phase happens.

E.g. in a game you can force collection to run between frames, potentially even picking which frames it runs on based on how much time you have. I don't know if that's a good strategy, but it's an example of the type of thing you can do.
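Since Rust has no built-in GC, here's a toy sketch of the underlying idea (all names invented): reclamation is deferred to a point the program chooses, a "between frames" collect call, instead of happening at unpredictable times. GC'd runtimes expose the same control as an explicit collect function.

```rust
// Toy deferred-reclamation heap: retired allocations pile up and are only
// freed when collect() is called, e.g. between frames with slack time.
struct DeferredHeap {
    garbage: Vec<Box<[u8]>>, // allocations waiting to be reclaimed
}

impl DeferredHeap {
    fn new() -> Self {
        Self { garbage: Vec::new() }
    }

    // Instead of freeing immediately, park the allocation.
    fn retire(&mut self, block: Box<[u8]>) {
        self.garbage.push(block);
    }

    // The "collection phase": run it only when there is time to spare.
    fn collect(&mut self) -> usize {
        let n = self.garbage.len();
        self.garbage.clear(); // drops (frees) everything at once
        n
    }
}

fn main() {
    let mut heap = DeferredHeap::new();
    for frame in 0..3 {
        // Simulated per-frame work that retires an allocation.
        heap.retire(vec![0u8; 1024].into_boxed_slice());
        // Between frames, decide whether this frame has slack to collect.
        let frame_had_slack = frame % 2 == 0;
        if frame_had_slack {
            println!("frame {}: collected {} blocks", frame, heap.collect());
        }
    }
}
```

Skipping collection on a busy frame (frame 1 above) just means the next collection has more to do, which is exactly the trade-off you'd tune per frame budget.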

