For some more detail on how we secure downloaded components: we implement a concept called verified execution [1]. We establish a chain of trust:
* a hardware key (on hardware that supports it), which checks the signature of
* the bootloader, which has a key baked into it and verifies that each boot slot has a properly signed vbmeta structure. This vbmeta contains a hash of the Zircon kernel and the Merkle root of the user-space system image blob.
* we then boot Zircon, which eventually starts up blobfs, our content-addressed filesystem. It reads the system image from blobfs and launches Component Manager and Package Cache (which implements a package filesystem on top of blobs).
* Package Cache is launched with the system image Merkle root from vbmeta, which lets us know which packages are part of the base package set.
* base packages are then launched on demand.
This establishes a direct line of trust from the hardware key to the base packages.
For over-the-air updates and ephemerally resolved packages, we use The Update Framework [2] and Omaha [3] for our package repositories. Each entry contains the Merkle root of the package metadata, which in turn bakes in the Merkle roots of each blob in the package. We bake the public keys for TUF and Omaha into our system image, which lets us verify, indirectly from the hardware up, that we are fetching the correct software.
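The indirection described above — one trusted root digest pins the metadata, and the metadata pins the digest of every blob — can be sketched roughly like this. This is an illustrative toy, not Fuchsia's actual API, and it uses Rust's non-cryptographic `DefaultHasher` as a stand-in for real Merkle hashing:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for a cryptographic digest. A real system would use a
// Merkle tree over SHA-256; DefaultHasher is NOT secure.
fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// Package metadata pins the digest of every blob in the package.
// (In reality these digests would be parsed out of `meta_bytes`.)
struct PackageMeta {
    blob_digests: Vec<u64>,
}

// The repository entry pins the digest of the metadata itself, so
// trusting one root digest transitively covers every blob.
fn verify_package(
    meta_digest: u64,
    meta_bytes: &[u8],
    meta: &PackageMeta,
    blobs: &[&[u8]],
) -> bool {
    digest(meta_bytes) == meta_digest
        && meta.blob_digests.len() == blobs.len()
        && meta
            .blob_digests
            .iter()
            .zip(blobs)
            .all(|(want, blob)| digest(blob) == *want)
}
```

A tampered blob changes its digest, fails the comparison against the pinned metadata, and is rejected without any further trust decisions.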
(At this point it's less "assume good faith" and more "halfheartedly check for the unlikely event that any good faith is present", but...)
How do you ensure that the end user is able to replace the hardware key and bootloader with ones of their own devising (and, more generally, to tamper with arbitrary parts of the system) without a remote vendor/employer/DRM firm being able to prevent or detect it?
I’d go further and say it’s not possible to fully implement async/await without compiler help.
I got really far with stateful, back in 2016 [1]. Stateful was an attempt to write a coroutine library as a proc macro that generated state machines, as opposed to using OS primitives like green threads. This was back before the Rust community really started working in this space. I ended up extracting the type system from rustc to do much of the analysis, but it ultimately failed because of how difficult it was to output Rust code that respected the borrow checker's rules. I also didn't have anything like the pinning system, so I couldn't catch move issues either.
It was a much better idea to just implement this in the compiler.
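For anyone curious what "generating state machines" means concretely: an `async fn` desugars into roughly this kind of hand-written `Future`. This is a simplified sketch — the hard part the compiler solves is carrying borrows across `.await` points, which this toy deliberately avoids by storing only a plain value in each state:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Roughly the machine for a hypothetical:
//   async fn double(x: u32) -> u32 { yield_once().await; x * 2 }
enum Double {
    // Haven't reached the await point yet.
    Start(u32),
    // Suspended at the await point; live data is saved in the state.
    Suspended(u32),
    // Finished; polling again violates the Future contract.
    Done,
}

impl Future for Double {
    type Output = u32;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        // `Double` holds no self-references, so it is `Unpin` and we
        // can freely take a `&mut` out of the `Pin`.
        let this = self.get_mut();
        match *this {
            Double::Start(x) => {
                *this = Double::Suspended(x);
                cx.waker().wake_by_ref(); // ask to be polled again
                Poll::Pending
            }
            Double::Suspended(x) => {
                *this = Double::Done;
                Poll::Ready(x * 2)
            }
            Double::Done => panic!("polled after completion"),
        }
    }
}

// Minimal no-op waker so the future can be polled without an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}
```

As soon as a local borrow needs to live across the `Suspended` state, the enum would have to contain a reference into itself — which is exactly where hand-rolled approaches like stateful ran into the borrow checker, and why pinning exists.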
This is great news! I'm glad Servo found a place to land. Are you planning on sticking with the MPL-2.0 license, or are you also considering relicensing as well?
> Closed-world ("sealed") traits. Rust's rules against private types in the public API are good civilization but they make it difficult to define pseudo-private traits like Mount that I want users to name but not implement or call into.
Rust actually supports sealed traits by using public traits in private modules. See this for how to use it, and how it works:
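In outline, the pattern looks like this (a minimal sketch, using the `Mount` trait from the parent comment as the running example):

```rust
mod private {
    // A public trait inside a private module: downstream crates
    // cannot name `Sealed`, so they cannot implement it.
    pub trait Sealed {}
}

// `Mount` is public and nameable, but implementing it requires the
// unnameable supertrait, so only this crate can add impls.
pub trait Mount: private::Sealed {
    fn mount(&self) -> &'static str;
}

pub struct Disk;
impl private::Sealed for Disk {}
impl Mount for Disk {
    fn mount(&self) -> &'static str {
        "mounted"
    }
}
```

Downstream users can still write `T: Mount` bounds and call `mount`, but an `impl Mount for TheirType` fails because they cannot satisfy (or even spell) the `private::Sealed` bound.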
I'm aware of that workaround and do use it in `rust-fuse`[0], but I'm not satisfied for two reasons:
* It's not understood by rustdoc, so I have to manually document that the trait is sealed.
* It technically violates Rust's rules against private symbols in the public API, so a future version of rustc might deprecate or remove that functionality.
> * It technically violates Rust's rules against private symbols in the public API, so a future version of rustc might deprecate or remove that functionality.
It's public though, just not externally reachable; the private module doesn't make the trait private.
For example, if you wanted to, you could re-export the trait in the parent module. (I guess the drawback of the "sealed" pattern is accidentally doing that when it would be unsound, because unsafe code relied on the trait being sealed.)
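That accidental re-export is a one-liner, which is what makes it easy to get wrong. A sketch (same hypothetical `Mount`/`Sealed` names as above):

```rust
mod private {
    pub trait Sealed {}
}

pub trait Mount: private::Sealed {
    // Default method so the trait is usable without further impls.
    fn kind(&self) -> &'static str {
        "mount"
    }
}

// The "accidental" re-export: downstream crates can now name `Sealed`
// and write `impl Sealed for TheirType {}`, defeating the seal.
pub use private::Sealed;
```

With the re-export in place, nothing stops an outside type from implementing `Sealed` and then `Mount` — so any unsafe code that assumed only known, audited impls exist is now unsound.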
I don't think I've heard anything about plans to restrict anything based on reachability, and it would be massively backwards-incompatible so I doubt it would even be considered for an edition.
> a future version of rustc might deprecate or remove that functionality
Are you sure about this? My understanding is that stable Rust limits itself to compatibility breaks that are both rare and trivially worked around. (Like adding a new inherent method to a standard type that happens to have the same name as your trait method.) Even in a new edition, I think there's a very heavy leaning towards changes that can be automated by `cargo fix`. Removing this idiom seems like it would be much too big of a change. (The obvious automatic fix -- just inserting `pub` as needed -- would presumably make a bunch of currently safe APIs unsound.)
Original author of serde here. I call it sir-dee, but the current maintainer, dtolnay, calls it sear-day. But we really don't care how people pronounce it. It's really more of a neither / potato thing:
[1]: https://fuchsia.dev/fuchsia-src/concepts/security/verified_e...
[2]: https://theupdateframework.io/
[3]: https://chromium.googlesource.com/chromium/src.git/+/master/...