> Often safety and performance are characterized as a set of zero-sum tradeoffs, yet it's often possible to find better tradeoffs whose holistic properties improve upon a previously seen "either-or".
There is truth in this, but I'm not sure the reader can (or should) extrapolate it to larger situations (not that the author implied we should; that was just my first interpretation).
We know that in certain situations, borrow checking works really well and allows us to guarantee safety with minimal friction and no overhead.
But there are other cases where safety and performance _are_ in contention, and we must choose one or the other. Anyone who has been forced to satisfy the borrow checker by adding a .clone(), reaching for Rc, or refactoring objects into a hash map and referring to them by an ID (which must be hashed every time it's exchanged for a reference) has felt this contention. In https://verdagon.dev/blog/myth-zero-overhead-memory-safety, I concluded that there's no general approach that always has zero overhead, at least not yet.
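To make that hash-map workaround concrete, here's a minimal Rust sketch (the `Unit`/`World` names are hypothetical, just for illustration): where we'd rather store a direct reference, the borrow checker pushes us toward an ID, and we pay a hash lookup on every access.

```rust
use std::collections::HashMap;

struct Unit {
    hp: i32,
    target_id: Option<u64>, // an ID where we'd rather store a direct reference
}

struct World {
    units: HashMap<u64, Unit>, // objects moved into a map so they can be named by ID
}

impl World {
    // Each access re-finds the target by ID: a hash plus a lookup,
    // paid every time we "exchange" the ID for a reference.
    fn attack(&mut self, attacker_id: u64) {
        let target_id = match self.units.get(&attacker_id).and_then(|u| u.target_id) {
            Some(id) => id,
            None => return,
        };
        if let Some(target) = self.units.get_mut(&target_id) {
            target.hp -= 10;
        }
    }
}
```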
So perhaps the best takeaway from this study is that, for small enough programs or areas of code, there is often no conflict between safety and performance.
For larger programs with more complex requirements and data interrelationships, the question becomes much more interesting.
> I see no reason why a straight port from Rust to C++ wouldn't have been possible while satisfying their requirements.
Like the author, I see no reason it wouldn't be possible, but I've never tried it myself. I've always thought that with the restrict keyword (non-standard in C++, where it's usually spelled __restrict__), one could make C++ code as performant as equivalent Rust code. Perhaps something else got in the way there.
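For what it's worth, the aliasing guarantee is the piece Rust gets for free: a `&mut` is statically known not to alias anything else, which is exactly the (unchecked) promise `restrict` makes in C99 and `__restrict__` makes as a compiler extension in C++. A minimal sketch of the kind of loop where it matters:

```rust
// In Rust, `dst: &mut [f32]` and `src: &[f32]` are guaranteed not to alias,
// so the compiler may hoist loads and vectorize freely. The equivalent C++
// needs `float* __restrict__ dst` to make the same promise -- unchecked.
fn scale_into(dst: &mut [f32], src: &[f32], k: f32) {
    for (d, s) in dst.iter_mut().zip(src.iter()) {
        *d = *s * k;
    }
}
```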
It's my opinion that Rust's safety model is flawed by being too rudimentary. It can only guarantee safety against a subset of memory issues, and it actively locks you out of programming techniques that are often necessary in high-performance contexts, unless you introduce unsafe code into your codebase. ATS[1] takes a different, much broader approach to ensuring safety: it enforces a notion of logical safety in arbitrary contexts. It's essentially a cutting-edge type system tacked onto C. What you see is what you get, and the safety constructs don't impose any structural restrictions on your code. Memory-safe pointer arithmetic, page table manipulation, etc. are all checked for correctness at compile time thanks to the type system. If you cannot prove your code works, it will not compile.
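I can't do ATS justice in a comment, but as a loose (and far weaker) analogy in Rust: the "proof" can be a value whose mere existence guarantees an invariant, established once at construction. The `Percent` type here is hypothetical, just to show the flavor; ATS extends this idea with full dependent types, down to pointer arithmetic.

```rust
// A value that carries its own proof: the only way to construct `Percent`
// is through `new`, which checks 0..=100 once. Every function that receives
// a `Percent` gets the invariant for free and can skip the runtime check.
struct Percent(u8);

impl Percent {
    fn new(n: u8) -> Option<Percent> {
        if n <= 100 { Some(Percent(n)) } else { None }
    }
}

fn apply_discount(price_cents: u64, off: Percent) -> u64 {
    // No bounds check needed here: `off.0 <= 100` holds by construction.
    price_cents - price_cents * off.0 as u64 / 100
}
```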
I don't actually care that much for ATS itself. It's highly experimental and unwieldy. There are very silly design decisions sprinkled about, and it's far more complex and kitchen-sink than it needs to be. It's a bizarre academic vision more than anything else, but one with a lot of important points to make. To me, ATS represents what we should be looking for in a safe systems language far more than Rust does.
I know the general reaction to the idea of languages that require you to write proofs is revulsion at what they demand of the programmer. My response is simple: safe code must be written by taking the time to sit down and think through the problem domain, mapping it out and working from explicit proof. Without that, our efforts to create "safety" are akin to taping a tarp over a hole in the wall and calling it a repair. Safety is important, and that's why half-measures are unacceptable.