You are arguing that no language X is safer than writing the program manually in Y, when a program in X is compiled to Y, because the compiler from X to Y may have bugs.
Therefore no code written in Rust (X) and executed on an x86 CPU (Y) is safer than manually written x86 assembly, because the Rust compiler (and LLVM) may have errors.
And we can actually go deeper. There is the CPU frontend that generates microcode, which may have bugs. There is also the CPU backend that executes the microcode, which may also have bugs. All in all, there is no hope in programming: there might be bugs everywhere, so you can never be sure what your program does.
That's not what I'm saying. I'm saying "rewrite it in Rust (or whatever)" isn't some silver bullet that fixes security problems. It's always about assessing risk -- both the risk of security issues and the risk of upsetting your users, etc. Basically exactly what the article says.
> Either way, the idea that you can write code in a safe language and compile to C to eliminate the type of bugs that C allows isn't true.
Is a rather different statement from:
> I'm saying "rewrite it in Rust (or whatever)" isn't some silver bullet that fixes security problems.
The first one is wrong, the second one is true.
Using a higher level language rules out some classes of programming errors which are possible in lower level languages. The fact that compilers have bugs does little to diminish those gains.
The semantics of Haskell do not allow you to express a program that performs a double free [0]. Perhaps one of the compilers will compile some Haskell code to a binary that frees memory twice. However, such a compiler bug is far less likely than a programmer making this mistake in C. What's more, when this compiler bug is detected and fixed, the problem is fixed in all affected code bases without any need to change the original source code. Thus the chances of bugs are lower.
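To make that concrete, here is a minimal sketch (my own illustration, not from the thread): ordinary Haskell code never calls free at all, so there is nothing to free twice.

```haskell
-- Illustrative sketch: ordinary Haskell allocates values and leaves
-- reclamation to the garbage collector. There is no `free` in sight,
-- so a double free cannot even be written down.
import qualified Data.Map.Strict as Map

main :: IO ()
main = do
  let m = Map.fromList [(1 :: Int, "one"), (2, "two")]
  print (Map.lookup 1 m)
  -- `m` simply goes out of scope; the runtime reclaims the memory.
```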
Nobody really argues that Rust (or OCaml, or Haskell, or whatever) is a silver bullet, i.e. a solution to all problems that will miraculously make programmers produce no bugs at all. Obviously we will have software bugs even with the most restrictive languages. No amount of formal proofs will save us from misunderstanding specifications or making typos. And then again, we will also have bugs in the implementations of those high-level abstractions.
And for the record, I am really annoyed by the movement to rewrite everything in Rust.
[0] Yes, you can call free through the FFI with whatever arguments you like, as many times as you like. But for the sake of brevity, let's assume this is not how you write your everyday Haskell.
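For completeness, a hedged sketch of that escape hatch (using Foreign.Marshal.Alloc): the double free is possible, but only by deliberately dropping down to raw pointers.

```haskell
-- Sketch of the FFI escape hatch from [0]: you have to opt into the
-- foreign allocation API explicitly to get a double free.
import Foreign.Marshal.Alloc (mallocBytes, free)

main :: IO ()
main = do
  p <- mallocBytes 16
  free p
  free p   -- double free: undefined behaviour, invoked on purpose
```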
The hope is to write a formal description of the required architecture functionality and then validate a proof that the implementation satisfies it. Not 100% safe against non-deterministic issues or very complex ones, but good against most others.
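As a toy illustration of that workflow (my own example, nothing to do with any real CPU specification), in Lean you state the required behaviour as a theorem and the proof checker validates it:

```lean
-- Toy example: a specification stated as a theorem and checked by Lean.
def add2 (n : Nat) : Nat := n + 2

-- The "required functionality": add2 always returns its input plus two.
theorem add2_spec (n : Nat) : add2 n = n + 2 := rfl
```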