They're also explicitly tracking new code by language and discussing memory safety vulnerabilities per year, and they link to [1], which explains that most of the memory safety bugs they get are in new code.
It's also useful to look at the "rate of bugs per line of new code" because even established, long-stable projects have code churn. Rare is the project that is unchanged, frozen in bakelite, and any mild refactor can introduce regressions or affect relied-upon implicit invariants.
(the person who posted the article here isn't the author (me))
The bugs in part 1 are all around using higher-ranked trait bounds. I'd disagree with the characterization that they're "feature requests": five years ago, yes, I would have agreed, but this entire area of the compiler needs to be bug-free for an upcoming feature (GATs) anyway, and indeed, the issues I found were often fixed by people working on related GAT bugs. Ultimately, my use of higher-ranked trait bounds is an attempt to emulate some of what GATs get you in stable Rust, so it's not surprising that the bugs are in the same area of the code.
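To make the emulation concrete, here's a minimal sketch of that stable-Rust pattern (the names `Borrowing`, `Wrapper`, and `print_item` are hypothetical, not taken from the article): the lifetime moves onto the trait itself, and a higher-ranked trait bound (`for<'a> ...`) then quantifies over it, approximating what a lifetime-parameterized associated type would give you directly.

```rust
use std::fmt::Display;

// Instead of an associated type that takes a lifetime (a GAT), the
// lifetime is a parameter of the trait itself.
trait Borrowing<'a> {
    type Item;
    fn item(&'a self) -> Self::Item;
}

struct Wrapper(String);

impl<'a> Borrowing<'a> for Wrapper {
    type Item = &'a str;
    fn item(&'a self) -> Self::Item {
        &self.0
    }
}

// The higher-ranked bounds require an implementation (and a Display
// impl on the projected `Item`) at *every* lifetime, which stands in
// for a GAT's `Item<'a>`.
fn print_item<T>(value: &T)
where
    T: for<'a> Borrowing<'a>,
    for<'a> <T as Borrowing<'a>>::Item: Display,
{
    println!("{}", value.item());
}

fn main() {
    print_item(&Wrapper("hello".to_owned()));
}
```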
If I understand correctly, GATs are about being able to change compiler assumptions about data types in user code? Generics acting on the compiler?
How is that helpful? Is it that you're trying to skip all the language boilerplate around creating objects? Are there any risks/footguns to that approach?
Generic type-driven code makes my head hurt, let me know if I'm somewhat close
GATs allow traits to abstract over associated types that are themselves to some degree abstract. In this case, it's necessary to do the relevant trait machinery around lifetime transformation since we need to be able to talk about "a replaceable lifetime of a type" in a generic way.
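For a concrete picture, here's a minimal sketch of a lifetime GAT (the names `WithLifetime`, `Item`, and `Wrapper` are hypothetical, not from the article): the associated type itself takes a lifetime parameter, which is exactly the "replaceable lifetime of a type" idea.

```rust
// A generic associated type (GAT): the associated type `Item` is itself
// parameterized by a lifetime, letting the trait talk about "this type,
// borrowed at some caller-chosen lifetime".
trait WithLifetime {
    type Item<'a>
    where
        Self: 'a;

    fn borrow<'a>(&'a self) -> Self::Item<'a>;
}

struct Wrapper(String);

impl WithLifetime for Wrapper {
    // For each concrete lifetime 'a, Item<'a> is a borrowed &'a str.
    type Item<'a> = &'a str where Self: 'a;

    fn borrow<'a>(&'a self) -> Self::Item<'a> {
        &self.0
    }
}
```

Without a GAT, `Item` would have to name one fixed lifetime, which is what forces the higher-ranked-trait-bound workaround described upthread.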
I think most of these companies have already made a pretty large investment; they're just not really open about it in many cases. So yeah, it's a pretty visible signal, but the investments they already had were much larger. A team of core developers is a pretty small investment compared to having a ton of teams all over the place, which most of them already had.
Many of us are looking for (or have found) day jobs, but it's possible some folks may be open to contracting work on Servo, etc. But nobody is being paid to work on Servo right now.
We spent a significant amount of effort in the last year and a half working on a redesigned modular/parallel layout subsystem.
The VR focus was because it was a good way to get Servo out to end users early without needing to be fully web compat -- WebXR doesn't require complex layout. We didn't drop our focus on full web compat during this, but full web compat has always been a more long-term goal given how complex the web platform is.
WebXR is also completely inane. What users are you "shipping" to? A tenth of a tenth of a tenth of a tenth of a percent of internet users?
You have a massive effort ahead of you, and you won't get there by chasing distractions. You need to have a singular focus if you're going to accomplish this goal.
Now they are free to spend their (free) time on what they want. When most of the core contributors were paid by Mozilla, they could not choose to, e.g., "focus on web compat"; hence they went with something you consider useless, but that kept the project alive. That allowed a few other things to be done, like the rewrite of the parallel layout.
Of course we can't know for sure what would have happened if they refused to work on VR, but my gut feeling is that this would not have helped the project.
They're not interested in "[providing] an independent, modular, embeddable web engine," they're interested in writing software in Rust and having their name associated with a Mozilla/Linux Foundation project. Go look at their governance.[1]
Their webpage tells you what they really care about, and it isn't embedding.
Servo was never tightly integrated with a sizeable browser project. It shared some components with Firefox, but the only time Servo itself was inside an actual browser release was Firefox Reality for AR. Which still exists, though I'm not sure what the future of development for it will look like.
I think Valve and Microsoft are better served (for different reasons) by there being only one browser engine, the one they use. It's users who really gain from the competition. Companies may gain in the long run from better performance and features (but I think most of the low-hanging fruit for performance has been picked), but in the short run it's a lot of risk just to help everyone, not yourself, which isn't where public companies shine.
Most of the graphs here are about new code.
[1]: https://security.googleblog.com/2021/04/rust-in-android-plat...