
You should read The Mythical Man-Month...



I know about The Mythical Man-Month, but this is a good counter-example: a massive project that breaks down into many small, independent pieces (an OS kernel, drivers, a browser with a JavaScript compiler, a Java VM, and all the applications, each one developed independently). You can have separate teams working on each of those.

EDIT: Ubuntu has about 38k amd64 packages in total [1]. Those are mostly developed by independent teams, so they could be re-developed in parallel. While Android isn't that big, it's still broken up into many sub-components, with a lot of external dependencies [2].

1 - http://askubuntu.com/questions/111753/how-many-i386-and-amd6...

2 - https://android.googlesource.com/


You realize The Mythical Man-Month is specifically about building a massive project - a computer operating system, with all of those 'independent' parts and 'small pieces'?


Yes, but it doesn't say that it can't be done by a large team. If you break down a project the right way, it can be parallelized.

I think a good specification helps a lot. For example, integration is much easier if the OS kernel and C library are coded to the POSIX standard rather than to whatever the project itself invents. The same applies to other standards, like programming languages (Java, JavaScript). You can have a JVM team implement the VM separately, then come in at the end and plug it into the big system.
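To make that concrete, here's a minimal sketch in C (a toy example, not from any real project): a cat-like utility written only against POSIX interfaces, so a team could build and test it on any conforming system long before the target OS's own kernel and libc are finished.

  /* Sketch: a cat-like tool using only POSIX calls
     (open/read/write), so it can be developed and tested on any
     conforming system before the target kernel even exists. */
  #include <fcntl.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      char buf[4096];
      ssize_t n;
      int fd = (argc > 1) ? open(argv[1], O_RDONLY) : 0;

      if (fd < 0)
          return 1;
      while ((n = read(fd, buf, sizeof buf)) > 0)
          if (write(1, buf, (size_t)n) != n)
              return 1;
      return n < 0 ? 1 : 0;
  }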

Also, what the MMM says regarding big projects is that "adding manpower to a late software project makes it later", not that it's never useful. As a counter-example, let's look at Android. Are you saying that all of Android was written by a small team of 10 people?


What if the "break down" process costs more time than sequential development?

And I can tell you that even the tiniest pieces are totally intertwined. E.g., rendering a webpage seems totally independent, but it's not. You'll need customizable fonts, so you'll either write your own font-rendering engine or use a shared library from the OS GUI; then you have charset-encoding problems, which are tied to the OS environment; then you have Unicode problems that reach down into the deepest paths of the OS kernel (hint: the file system).
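To show how deep that file-system hint goes, here's a rough C sketch (assuming a POSIX system): the kernel treats filenames as opaque byte strings, so two different encodings of the "same" name silently become two different files.

  /* Sketch: POSIX filenames are opaque bytes, not characters.
     Both calls below create *different* files, even though both
     spell "café.txt" to a human reader. */
  #include <fcntl.h>
  #include <unistd.h>

  int main(void)
  {
      /* "café.txt" with 'é' encoded as UTF-8 (0xC3 0xA9) */
      int a = open("caf\xc3\xa9.txt", O_CREAT | O_WRONLY, 0644);
      /* "café.txt" with 'é' encoded as Latin-1 (0xE9) */
      int b = open("caf\xe9.txt", O_CREAT | O_WRONLY, 0644);

      if (a >= 0) close(a);
      if (b >= 0) close(b);
      return 0;
  }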

Even rendering a datetime in JavaScript requires the locale and date-time settings from the OS. Then you need NTP to keep the clocks in sync, and so on; it's a total clusterfuck.
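You don't even need JavaScript to see this; the same OS dependency shows up in plain C (a sketch whose output depends entirely on the host's locale configuration):

  /* Sketch: even "print the date" leans on the OS locale
     database.  setlocale(LC_TIME, "") picks up the environment
     (LANG/LC_TIME), so the output differs machine to machine. */
  #include <locale.h>
  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      char buf[128];
      time_t now = time(NULL);

      setlocale(LC_TIME, "");  /* use the OS-configured locale */
      strftime(buf, sizeof buf, "%c", localtime(&now));
      puts(buf);
      return 0;
  }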


> And I can tell you that even the tiniest pieces are totally intertwined. E.g., rendering a webpage seems totally independent, but it's not. You'll need customizable fonts, so you'll either write your own font-rendering engine or use a shared library from the OS GUI; then you have charset-encoding problems, which are tied to the OS environment; then you have Unicode problems that reach down into the deepest paths of the OS kernel (hint: the file system).

At this point in time, all of these requirements are mostly known (based on humanity's collective experience with software). Why wouldn't you incorporate them into the initial design (for example, build the OS with Unicode support from the start)?

I have some counter-examples. Let's say you're building the graphics subsystem, targeting x86 hardware. You can have 3 teams working on Intel, AMD and NVIDIA drivers simultaneously, since they don't really overlap (however, they all do need to communicate with the people writing the driver API).

Similarly, you can split up some of the compiler work. If you're targeting x86, ARM and MIPS, those backends can be written (mostly) in parallel.
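Both splits work for the same reason: the teams share a frozen interface and implement it independently. Here's a rough C sketch of such a seam (all names are made up for illustration):

  #include <stdio.h>

  /* The frozen interface every backend team codes against.
     (All names here are hypothetical.) */
  typedef struct {
      const char *name;
      void (*emit_add)(int dst, int lhs, int rhs);
  } codegen_backend;

  /* The x86 team's file... */
  static void x86_add(int d, int l, int r)
  { printf("  addl: r%d = r%d + r%d\n", d, l, r); }
  static const codegen_backend x86 = { "x86", x86_add };

  /* ...and the ARM team's file, written concurrently. */
  static void arm_add(int d, int l, int r)
  { printf("  add r%d, r%d, r%d\n", d, l, r); }
  static const codegen_backend arm = { "arm", arm_add };

  int main(void)
  {
      const codegen_backend *targets[] = { &x86, &arm };
      for (int i = 0; i < 2; i++) {
          printf("[%s]\n", targets[i]->name);
          targets[i]->emit_add(0, 1, 2);
      }
      return 0;
  }

Only the struct definition has to be agreed on up front; each backend table can then live in a file owned by a separate team.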


> Why wouldn't you incorporate them into the initial design (for example, build the OS with Unicode support from the start)?

Clever-ass people tried that. That's why systems with UCS-2 at their core (Windows, JavaScript) suck at handling iOS emoji, and why MySQL can only handle 3-byte UTF-8 by default (unless you explicitly use utf8mb4). The reason is that UCS-2 was "good enough" at the design stage of those systems.
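Here's the failure mode in one C sketch: anything above U+FFFF (where the emoji live) needs a fourth byte in UTF-8 and a surrogate pair in UTF-16, which is exactly what a "good enough" UCS-2 or 3-byte-utf8 design can't represent.

  /* Sketch: why UCS-2 and MySQL's 3-byte "utf8" both choke on
     emoji.  U+1F600 is above U+FFFF, so it needs a surrogate
     pair in UTF-16 and 4 bytes in UTF-8. */
  #include <stdio.h>

  static int utf8_len(unsigned long cp)
  {
      if (cp < 0x80)    return 1;
      if (cp < 0x800)   return 2;
      if (cp < 0x10000) return 3;  /* BMP: all that UCS-2 can hold */
      return 4;                    /* emoji: utf8mb4 territory */
  }

  int main(void)
  {
      printf("U+00E9 (e-acute): %d bytes\n", utf8_len(0xE9));    /* 2 */
      printf("U+4E2D (CJK):     %d bytes\n", utf8_len(0x4E2D));  /* 3 */
      printf("U+1F600 (emoji):  %d bytes\n", utf8_len(0x1F600)); /* 4 */
      return 0;
  }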

Even the most clever-ass design on the planet cannot cover every case. You have to make trade-offs at some point.

> You can have 3 teams working on Intel, AMD and NVIDIA drivers simultaneously, since they don't really overlap

You are so wrong this time. Do you know about the power-saving technology that lets you switch from the dedicated GPU to the integrated Intel GPU? Well, that driver support sucks on Linux precisely because of people like you thinking "well, how on earth would Intel and NVIDIA drivers overlap?" It turns out to be the difference between a laptop that runs for 10 hours and one that runs for 3.

http://en.wikipedia.org/wiki/Nvidia_Optimus#GNU.2FLinux_supp...
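You can even watch the coupling from userspace: a mainline Linux kernel exposes the hybrid-graphics mux through vga_switcheroo in debugfs (path as I remember it from the kernel docs; needs root and a mounted debugfs). A C sketch that just dumps its state:

  /* Sketch: dump the kernel's vga_switcheroo state.  Each line
     describes one GPU (integrated vs. discrete), with a '+'
     marking the one currently driving the display. */
  #include <stdio.h>

  int main(void)
  {
      char line[256];
      FILE *f = fopen("/sys/kernel/debug/vgaswitcheroo/switch", "r");

      if (!f) {
          perror("vgaswitcheroo");
          return 1;
      }
      while (fgets(line, sizeof line, f))
          fputs(line, stdout);
      fclose(f);
      return 0;
  }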

In general, people without deep software engineering experience make super clever-ass decisions and fail spectacularly, because building a software project is absolutely not like assembling a car from standardized parts and modules. Another metaphor: in physics, if you have a problem with lighting, you can't change how the sun works. In software engineering, you can. Read the book The Mythical Man-Month, as well as Dreaming in Code. You'll see even experienced developers fail miserably at not-so-large software projects.


The interfaces of a Unix-like OS are better understood now than the interfaces of a second-system mainframe OS ("this time we'll get everything right for sure") were in the 1960s.



