The problem with your point is that it's one-sided: the scale and scope of deployed Rust or Go software has not matched that of C/C++ software. We also don't have particularly good data on how many memory safety bugs are in projects with good deployment controls versus ones without them.
We also don't know how well tested any of the vulnerable Microsoft code actually was, so I'd be wary of drawing broad conclusions across languages from that single statistic. The numbers are also likely self-reported, not rigorously gathered for this kind of analysis.
The fact that there's such a large difference between C/C++ projects in historically discovered vulnerabilities suggests to me that it can't just be down to the language, but to how the projects and their deployments are engineered.
You're one unnoticed checkin of an "unsafe" construct in any of these languages away from having the dreaded memory safety vulnerability introduce itself into your project. Even worse, you could depend on a crate with an unsafe block you never previously called, and a new checkin now exercises that extant, disregarded method. So what do you do? The language hasn't done anything for you here. If the answer is an analysis and/or fuzzing tool, how are we any different because of the language?
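To make that concrete in Go terms (a contrived sketch; the helper and call site are invented for illustration, not taken from any real project):

```go
package main

import (
	"fmt"
	"unsafe"
)

// reinterpret has compiled cleanly since day one, but nothing called it.
// The compiler has no complaint; unsafe.Pointer opts out of the type system.
func reinterpret(b []byte) *int64 {
	// Nothing here checks that len(b) >= 8.
	return (*int64)(unsafe.Pointer(&b[0]))
}

func main() {
	short := []byte{1, 2} // a 2-byte allocation
	// A later checkin starts calling the dormant helper: dereferencing
	// reads 8 bytes through a 2-byte slice, an out-of-bounds read that
	// no compile-time check will flag.
	fmt.Println(*reinterpret(short))
}
```

Nothing in the diff that adds the call mentions unsafe at all, which is exactly the review problem.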
Even for Go, a language I love quite a bit, if you forget to synchronize shared maps under simultaneous reads and writes you're in for an unrecoverable runtime crash, and possibly real safety bugs. The GC and the fact that "unsafe.Pointer" is "slightly hard" to use isn't a huge advantage when it leaves entire classes of bugs lying on the floor like rakes with the tines pointed straight up.
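A minimal sketch of that failure mode (the map contents and loop bounds are mine, just to provoke the interleaving):

```go
package main

import "sync"

func main() {
	m := map[int]int{}
	var wg sync.WaitGroup
	wg.Add(2)

	go func() { // unsynchronized writer
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			m[i] = i
		}
	}()
	go func() { // unsynchronized reader
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			_ = m[i]
		}
	}()
	// The runtime typically aborts here with a fatal error like
	// "concurrent map read and map write", which recover() cannot catch.
	wg.Wait()
}
```

Guarding every access with a sync.Mutex (or switching to sync.Map) fixes it, but neither the compiler nor the type system makes you; `go run -race` only catches it if your tests happen to hit the interleaving.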
I think there is a strong culture in the Rust community of using `unsafe` carefully, fostered in part by Rust's overt advertisement, features, and design philosophy. The infamous actix-web debacle suggests Rust users tend, if anything, to be overzealous about avoiding `unsafe`. I think the design philosophy plays a big part in developing that culture: `unsafe` is seen less as the thing only super hacker wizards use, and more as a tool to be used judiciously, but out in the open.
So I suppose it's not literally just the Rust language itself, but given the context of Rust's development, there seems to be an intertwined culture that was more likely to arise than not.