> And I can tell you even the tiniest pieces are all related to one another. E.g., rendering a webpage seems totally independent, but it's not. You'll need customizable fonts, so you'll have to write your own font-rendering engine or use a shared library from the OS GUI; then you have language charset encoding problems, which are tied to the OS environment; then you have Unicode problems that go down to the deepest parts of the OS kernel (hint: the file system).
At this point in time, all of these requirements are mostly known (based on humanity's collective experience with software). Why wouldn't you incorporate them into the initial design (for example, build the OS with Unicode support from the start)?
I have some counter-examples. Let's say you're building the graphics subsystem, targeting x86 hardware. You can have 3 teams working on Intel, AMD and NVIDIA drivers simultaneously, since they don't really overlap (however, they all do need to communicate with the people writing the driver API).
Similarly, you can split up some of the compiler work. If you're targeting x86, ARM and MIPS, those backends can be written (mostly) in parallel.
> you incorporate them into the initial design (for example, build the OS with Unicode support from the start)?
Clever-ass people tried that. That's why systems built around UCS-2 at their core (Windows, JavaScript) suck at handling iOS emoji, and why MySQL's default utf8 can only handle 3-byte UTF-8 sequences (unless you explicitly use utf8mb4). The reason is that UCS-2 seemed "good enough" at the design stage of those systems.
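To make that concrete, here's a small Python sketch (Python just for illustration) showing why emoji break these designs: emoji live outside the Basic Multilingual Plane, so they need a UTF-16 surrogate pair and 4 bytes of UTF-8, which a UCS-2 core or MySQL's legacy 3-byte utf8 can't represent.

```python
# Emoji are outside the Basic Multilingual Plane (code points > U+FFFF),
# so they don't fit in a single 16-bit code unit or in 3 bytes of UTF-8.
emoji = "\U0001F600"  # GRINNING FACE, code point U+1F600

# UTF-16 needs a surrogate pair: two 16-bit units = 4 bytes.
# A pure UCS-2 system only understands single 16-bit units.
assert len(emoji.encode("utf-16-le")) == 4

# UTF-8 needs 4 bytes here, which MySQL's legacy 3-byte "utf8"
# (a.k.a. utf8mb3) rejects; utf8mb4 accepts it.
assert len(emoji.encode("utf-8")) == 4

# By contrast, a BMP character fits in one 16-bit unit
# and at most 3 UTF-8 bytes, so UCS-2-era designs handle it fine.
assert len("\u00e9".encode("utf-16-le")) == 2  # "é"
assert len("\u00e9".encode("utf-8")) == 2
```

The surrogate-pair mechanism was bolted onto UCS-2 later (as UTF-16), which is exactly the kind of retrofit the parent comment is describing.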
Even the most clever-ass design on the planet cannot cover all cases. You have to make trade-offs at some point.
> You can have 3 teams working on Intel, AMD and NVIDIA drivers simultaneously, since they don't really overlap
You are so wrong this time. Do you know about the power-saving technology that lets you switch from the dedicated GPU to the Intel integrated GPU? Well, driver support for that sucks on Linux precisely because of people who, like you, think "how on earth would the Intel and NVIDIA drivers overlap?" Turns out it's the difference between being able to use your laptop for 10 hours versus 3.
In general, people without deep software engineering experience make super clever-ass decisions and fail spectacularly, because building a software project is absolutely not like assembling a car from standardized parts and modules. Another metaphor: in physics, if you have a problem with lightning, you can't change how the sun works. In software engineering you can. Read The Mythical Man-Month, and also Dreaming in Code. You can see even experienced developers fail miserably at not-so-large software projects.