It isn't, but in twenty years getting paid to write software I have far more regrets where I rewrote and shouldn't have, than where I should have rewritten and didn't.
If you're Google and you have people with these abilities kicking about, it's probably not a crazy investment to see what happens. There's an HN story elsewhere in the list on post-quantum key agreement experiments in Chrome; again, there's a fair chance this ends up going nowhere, but if I were Google I'd throw a few resources at this just in case.
But on the whole I expect Fuchsia to quietly get deprecated while Linux lives on, even if there's lots to like about Fuchsia.
or run the entire business (when it works) without which the whole company would grind to a halt.
A rewrite made no sense to me: I'd end up maintaining version A alongside version B, with B constantly lagging A, unless I severely restricted the scope of B, in which case it'd be an incomplete (though better-written, more maintainable) A.
Instead I went with the isolate (not always easy), shim, rewrite, replace, remove-shim approach.
It does feel a bit like spinning plates blindfolded sometimes, in the sense that I'm always expecting to hear a crash.
So far I've replaced the auth system and the reports generation system, refactored a chunk of the database, implemented an audit system, changed the language version, brought in proper dependency management, replaced a good chunk of the front end (jQuery soup to Vue/TypeScript), and rewritten the software that controls two production units, implementing an API for that software so that it isn't calling directly into the production database. And all of it without any unplanned downtime (though I'm still not sure how; mostly through extensive testing and spending a lot of time planning each stage).
It's slower because I have to balance new features against refactor time, but I have management buy-in and have kept it, mostly through being extremely clear about what I'm working on and what the benefits are, and by building up some nice momentum in terms of deploying new stuff that fixes problems for users.
The really funny part is that even though I'm refactoring ~40% of the time, I'm deploying new features faster than the previous dev who wrote the mess...because I spent the time fixing the foundations in the places I knew I'd need for new features going forward.
In my experience the second time leads to architecture astronautics... third time is when you get it right.
Although in the OS space one might argue that the second generation of time-sharing OSes (TOPS-10, ITS, MCP...) got more things right than Unix and its descendants did.
For an OS it means that you should pick one abstraction for process state and one abstraction for IO, and in fact you can use the same abstraction for both. In this view Plan 9 makes sense, while modern Unix, with files, sockets, AIO, signals, various pthread and IPC primitives, and so on, does not (not to mention the fact that on every practical POSIX implementation some of these mechanisms are emulated in terms of other synchronisation mechanisms).
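A small illustration of the "one abstraction" idea, using the /proc filesystem that Linux borrowed from Plan 9: process state is exposed as files, so the same plain read() used for ordinary IO also works for inspecting a running process, with no special process-inspection syscall. This sketch is Linux-specific; on Plan 9 proper, far more of the system (networking, the window system) is exposed through the same file interface.

```python
import os

def process_name(pid: int) -> str:
    """Read a process's command name using ordinary file IO.

    /proc/<pid>/comm is a regular-looking file that the kernel
    synthesises from process state on demand.
    """
    with open(f"/proc/{pid}/comm") as f:
        return f.read().strip()

# Inspect our own process through the filesystem abstraction.
print(process_name(os.getpid()))
```

The design point is that anything already written against the file abstraction (shell tools, `cat`, `grep`) can operate on process state for free.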