I agree that it would be great if the ecosystem were a bit slower to adopt every new version, and it does seem like things are beginning to trend in that direction, as many foundational crates have begun declaring MSRVs older than the latest release.
However, I don't think the pace of updates really changes anything in terms of toolchain security. If Rust decided to go to a 36-week release cycle, each release would just have 6x as much stuff in it. If you can't keep up with reviewing N changes per 6-week release cycle, moving to a 6*X-week cycle will not help you review N*X changes.
I also agree that there is too much churn in the Rust ecosystem and that we should try to slow things down in the coming years. ntpd-rs does this too: our MSRV is currently 1.70 (released over a year ago), and we test our code on CI against that version as well as the current stable release.

We go a little further, too. Using the `direct-minimal-versions` flag (nightly-only right now, unfortunately) we downgrade our dependencies to the minimal versions specified in our `Cargo.toml` and test against those, in addition to the latest versions pinned in our `Cargo.lock`, which we update regularly. This lets us at least partially verify that we still work with old versions of our dependencies, making it easier for upstream packagers to match their packages against ours.

Of course we should all update to newer versions whenever possible, but sometimes that is hard to do (especially for package maintainers in distributions such as Fedora and Debian, who have to juggle so many packages at once), and we shouldn't create unnecessary work when it's not needed. Hopefully this is our way of helping the ecosystem slow down a little and focus more on security and functionality, and less on redoing the same thing all over again every year because of some shiny new feature.
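Concretely, the minimal-versions check boils down to something like this (a simplified sketch; the exact CI wiring in ntpd-rs may differ):

    # Nightly-only: re-resolve the lockfile down to the minimum
    # versions declared for direct dependencies in Cargo.toml.
    cargo +nightly update -Z direct-minimal-versions
    cargo test    # run the suite against those minimal versions

    # Separately, test against the latest versions in Cargo.lock.
    cargo update
    cargo test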
Serious question: why does it need to change so much and so often? I know nothing about Rust development, so I'm curious about why it's worlds different from the development of other toolchains.
Let us be clear that the notion of "change" being referred to here is forward compatibility, not backward compatibility. The user is commenting on the fact that Rust library authors make use of new features as they become available; as a result, compiling Rust code often requires a recent version of the compiler, or else you will need to find an older version of the library in question.
In addition, Rust was born at Mozilla and imitates Firefox's rapid release schedule of one release every six weeks. This does not mean that Rust releases are substantial or traumatizing, only that they are frequent. The contract with users is that Rust releases must be painless, so as not to fatigue users and discourage them from upgrading. The success of this painless-upgrade strategy is proved by the fact that library authors are so quick to upgrade, as mentioned.
This is in contrast to other languages, where historically a new version of the compiler might be released as infrequently as once every three years. It seems that these languages have begun taking cues from Rust, as even Java now releases once every six months.
Consider that there's still a good amount of work being done on gcc's and clang's C++ frontends despite how old these languages are. Wouldn't it stand to reason that Rust, a comparatively very new language, would have new features and compiler improvements added at an even faster pace?
I suspect that if you were to look at other reasonably popular languages of the same era, you'd see a similar level of change.
What an interesting coincidence that all three of these accounts were created within 15 minutes of each other. I'm sure all "three" users are being "intellectually honest" here.
Garbage collection seems like a significant no-go for many safety-critical systems. Other oddities like 31- and 63-bit integers probably aren't a big deal, but they are still weird. "Mature" seems like a stretch when OCaml only got proper multithreading 1.5 years ago.
"outdated article" the commit tested is 3 months old.
This is a standard V community tactic: all negative feedback is "bashing", anything older than a week is "outdated", and anything up to date shouldn't have been written at all; it should have been posted on the issue tracker instead, to be ignored there.
Stop trying to control everyone else's speech and just work on fixing the long list of issues folks already took the time to report.
> Everything described here is correct for the b66447cf11318d5499bd2d797b97b0b3d98c3063 commit. This is a summary of my experience with the language over 6 months + information that I found on Discord while I was writing this article.
That's not really accurate. Ojeda is a long-time kernel contributor, and so are many of the folks writing drivers. Maintainers of various subsystems are also particularly interested.
Not everyone is, of course, but hardly "just (Rust) advocates" as you suggest.
By design, Rust requires unsafe code to implement any non-trivial data structure (anything beyond plain-old-data types). This applies both to the Rust standard library and to third-party crates.
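A minimal sketch of what that looks like in practice (illustrative code, not from any real crate): a doubly linked list needs two links aliasing each node, which safe ownership and `&mut` borrowing can't express, so the links become raw pointers and every dereference needs `unsafe`:

    use std::ptr;

    // Each node is referenced from two places (prev/next), which rules
    // out plain Box/&mut ownership; raw pointers are the escape hatch.
    struct Node<T> {
        value: T,
        prev: *mut Node<T>,
        next: *mut Node<T>,
    }

    pub struct List<T> {
        head: *mut Node<T>,
        tail: *mut Node<T>,
    }

    impl<T> List<T> {
        pub fn new() -> Self {
            List { head: ptr::null_mut(), tail: ptr::null_mut() }
        }

        pub fn push_back(&mut self, value: T) {
            let node = Box::into_raw(Box::new(Node {
                value,
                prev: self.tail,
                next: ptr::null_mut(),
            }));
            // Dereferencing a raw pointer is only allowed in an unsafe
            // block; upholding the link invariants is on the programmer.
            unsafe {
                if self.tail.is_null() {
                    self.head = node;
                } else {
                    (*self.tail).next = node;
                }
            }
            self.tail = node;
        }
        // A real implementation also needs an unsafe Drop impl to free
        // the nodes; omitted here for brevity (this sketch leaks).
    }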
By contrast, thanks to the VM and the GC, C# makes it possible to implement very complicated data structures without any unsafe code or unmanaged interop. The standard library is also implemented in an idiomatic, memory-safe subset of the language. For example, here's the hash map: https://source.dot.net/#System.Private.CoreLib/src/libraries...
> fails to prevent null pointer exceptions and modified collection exceptions
Yes indeed, but these exceptions are very unlikely to cause security bugs in the software.
The entire JIT, the garbage collector, and most of C#'s VM are implemented in C++. This has caused various issues in the past which are exploitable from managed code. The amount of unsafe code used to implement C# vastly outweighs the amount in Rust's standard library.
If you are going that way, Rust's reference compiler depends on LLVM, which is written entirely in C++, and LLVM's bitcode semantics have broken Rust's code generation multiple times, forcing regressions and new compiler releases with optimization features disabled.
Also, plenty of crates are bindings to C and C++ libraries, with nice unsafe blocks.
Yeah, that doesn't make Rust's dependency on C++ for its safety go away.
The point is the "look at what I say, not what I do" attitude when talking about safe languages that themselves depend on C and C++ libraries and compiler toolchains.
It has to do with your incorrect assertion that using C++ in the runtime is a disadvantage for C# relative to Rust, which equally depends on C++ in both of its compiler toolchains, rustc and gcc-rs.
When Rust gets fully bootstrapped in a self-hosted toolchain, you'll have a point.
The "link" is just the repos rather than asking AI to hallucinate an answer. Rust's repo contains 2.2M LOC. The dotnet runtime contains 1.5M lines of C++.
Now, if we remove in-tree tests from the totals, we arrive at 1.5M lines of C++ (most tests are written in C#, as you would expect) and 1.7M lines of Rust.
However, this count does not exclude safe Rust code. I don't have a tool offhand that can provide a precise count of lines of unsafe code, but we can get some general estimates. There are 1,958 instances of "unsafe fn" out of 103,205 instances of "fn ". Further, there are 11,545 instances of "unsafe " in the Rust repo, while there are 10,768 instances of "unsafe " in the runtime repo.
Given that unsafe functions comprise less than 2% of all functions in the Rust repo, I think my claims are reasonable.
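For what it's worth, that naive count is easy to reproduce; here's a rough std-only Rust sketch (the checkout path is a placeholder, and the matching is purely textual, so like grep it also counts occurrences in comments and strings):

    use std::fs;
    use std::io;
    use std::path::Path;

    // Recursively count occurrences of `needle` in all .rs files under `dir`.
    fn count_in_tree(dir: &Path, needle: &str) -> io::Result<usize> {
        let mut total = 0;
        for entry in fs::read_dir(dir)? {
            let path = entry?.path();
            if path.is_dir() {
                total += count_in_tree(&path, needle)?;
            } else if path.extension().is_some_and(|e| e == "rs") {
                if let Ok(text) = fs::read_to_string(&path) {
                    total += text.matches(needle).count();
                }
            }
        }
        Ok(total)
    }

    fn main() -> io::Result<()> {
        let root = Path::new("rust"); // placeholder: path to a rust-lang/rust checkout
        let unsafe_fns = count_in_tree(root, "unsafe fn")?;
        let all_fns = count_in_tree(root, "fn ")?;
        println!(
            "unsafe fn: {unsafe_fns} / fn: {all_fns} (~{:.1}%)",
            100.0 * unsafe_fns as f64 / all_fns as f64
        );
        Ok(())
    }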
I don't find it particularly surprising. D uses a garbage collector, while C, C++, and Rust do not. D's GC can be disabled, but that isn't very useful when most D code, including (until just a few years ago) the standard library, wasn't written with that in mind.
D is much more closely a competitor to C# than to C++. D has a few nice features, like advanced compile-time programming, but the actual nuts and bolts that staff-level engineering management looks at aren't really solid. D's GC is a design straight out of the '80s. Dmd has good compiler throughput, but its generated code quality isn't very good. Ldc generates much better code, but compile times are much longer.
Adopting a language at a FAANG, beyond a single team just YOLO-deploying it to production, requires integrating dozens of engineering systems, covering everything from post-mortem debugging to live profiling to authentication. The cost of doing this is on the order of tens of millions of dollars.
D just isn't suitable as a C or C++ replacement in the places that actually require one, and the cost of enabling it at large companies isn't worth the incremental improvement it does offer in some areas.