I can't give you an exact cost and time, but what I can do is give you relative cost and time: "This is easy; this is easy but will take a while (low variance); this is hard (relatively long time, high variance)." That gives you your estimate and your tolerances.
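To make "estimate plus tolerance" concrete, here's a toy sketch using classic three-point (PERT) estimation; the task buckets and day counts are invented for illustration:

```python
# Three-point (PERT) estimation: expected time E = (O + 4M + P) / 6,
# standard deviation SD = (P - O) / 6. A wide SD means high variance.
tasks = {
    # name: (optimistic, most likely, pessimistic) in days -- invented figures
    "easy":          (1, 2, 3),
    "easy but long": (8, 10, 12),
    "hard":          (5, 15, 40),
}

for name, (o, m, p) in tasks.items():
    expected = (o + 4 * m + p) / 6
    sd = (p - o) / 6
    print(f"{name:14s} ~{expected:4.1f} days, +/- {sd:.1f}")
```

Note that "easy but long" comes out long with a narrow tolerance, while "hard" comes out both long and wide, which is exactly the estimate-plus-tolerance shape described above.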
Further, I can work with you to figure out what portion of each project will give you the biggest bang for your buck. You can say, "I want to spend x00,000 dollars," and I'll come back with a demo each week that you can play around with. At any point, you can say, "This is enough to generate value for me; let's move on to the next thing for a while."
Further still, I can build it with enough test coverage and good design practices that it will be possible to extend the design at a later point without scrapping everything.
On the other hand, you can spend half of that x00,000 developing a specification. If it's detailed enough, I can give you a very low-variance estimate of how long the work will take. However, you won't know whether it actually meets your needs until you see it. You won't discover the problems in what you really need until it's too late, and you'll end up spending more money in the end.
You have successfully solved the problem posed, but it was not well-stated. In a real-world scenario, it is important not just to identify which of the 10 ideas is most promising, but to defeat the null hypothesis that the programmers should be fired and no idea should be pursued. After all, to choose the best of 10 bad ideas is to have failed as an economic enterprise.
In order for a business to be investable, it is necessary to demonstrate that it will be profitable to an investor. (And of course all businesses need investment, whether that is VC money or just a lone developer's part-time effort.)
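As a toy illustration of defeating that null hypothesis (every probability, payoff, and cost below is invented): fund an idea only if its expected value beats doing nothing at all.

```python
# Hypothetical ideas: (name, probability of success, payoff if it works, cost to build).
# The null hypothesis wins if no idea has a positive expected value.
ideas = [
    ("idea A", 0.10, 500_000, 100_000),
    ("idea B", 0.40, 200_000, 120_000),
    ("idea C", 0.05, 900_000, 150_000),
]

fundable = [(name, p * payoff - cost)
            for name, p, payoff, cost in ideas
            if p * payoff - cost > 0]

if fundable:
    name, ev = max(fundable, key=lambda x: x[1])
    print(f"Pursue {name} (expected value {ev:+,.0f})")
else:
    print("Null hypothesis stands: pursue none of these ideas.")
```

Here all three expected values are negative, so choosing the "best" of the three would still be choosing to lose money.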
We have avoided this problem because, in the early days of software, most software was profitable. But as the software industry has grown, that has become proportionally less true, and I seriously question whether, in 2014, more LOC are committed each day to projects in the black than to projects in the red.
> to defeat the null hypothesis that the programmers should be fired and no idea should be pursued
Now I'm imagining a website like oDesk or ELance, but which requires employers to "defeat the null hypothesis" for any job contract they want to post. What a wonderful world that would be.
There are two potential problems posed in the scenario:
The first, as I believe you presume, is that the business is looking to develop a revenue-generating product. I agree that continuous delivery, quick iterations, late-binding requirements (a la behavior-driven development), and rigorous testing are not strong candidates for this problem in the context of big business, and I can point to a handful of systems where this approach did not work well.
However, the second interpretation is that a big business has ten internal projects to increase their efficiency. In this case, return on investment is relatively easy to calculate, but requirements are usually nebulous and thus estimates are by necessity high-variance.
I think the subset of agile techniques I described (along with cross-functional teams, high customer involvement, and exploratory testing) is well suited to this problem because, as a collection, these practices let a business receive value quickly and "fail fast," in lean-startup parlance.
Are you building embedded software? The agile approaches aren't, collectively, well suited to that domain (though I'd argue pair programming and cross-functional teams aren't a bad idea there). They're not going to work for games, either. But for internal software and software with fewer than ten clients, I argue that this subset reduces risk and increases the potential for high ROI.
___
Now, as for a few things you mention.
* The programmers are not responsible for calculating return on investment. But is an extensive requirements-gathering mission, with months of meetings, going to be cheaper once the cost of those meetings (in lost productivity and wages) is counted? In my experience, these documents end up being works of fiction, from which aggressive estimates are produced under pressure from management. Then costs overrun (especially since nobody accounted for the customer's lost productivity) because the estimate was based on business need rather than reality. Then features are dropped and technical debt accrues, leading to a system that is hard to maintain and is scrapped after five years of frustration, when the customer starts over. At least this time they know what they don't want. I don't think that works.
* I think that is the situation that leads to the "red" projects you mention. You speak of the early days of software, when most software was profitable, but I've seen and heard the horror stories of multi-million (and billion!) dollar projects from the '80s and '90s. While http://calleam.com/WTPF/?page_id=1445 cites recent studies, I remember studies finding that two-thirds of projects failed in the '80s and '90s as well.
So most software wasn't profitable then (at least in terms of internal projects), and it isn't now.
However, I'd argue that the collection of techniques above allows a project to fail faster and cheaper if managed correctly.
If we were to put it into a methodology, here it is:
1. Determine the return on investment if X can be automated. How many hours are saved? How many fewer people are needed to do the same job? What percentage better forecast can this project create (estimated)? (A toy sketch of this arithmetic appears after the list.)
2. Use some sort of behavior-driven development to get a basic sense of the complexity of the system needed (a minimal example also follows the list). If the complexity and ROI don't match up, stop here. You've lost a minimal amount of money.
3. Start prioritizing which pieces would provide the most business value for the least complexity cost. Build the first of these pieces, elaborating on the BDD done in step 2. Write it with good tests so that the system can be extended with reduced fear of later regression.
4. Demo to the customer. Does this fit their needs? Is additional complexity exposed? Does this provide value? This is the next cutoff point: if the system appears to be more complex than thought, or if the team is unable to provide the business value anticipated, stop the project and re-evaluate or shelve it. Everyone then gets together for a retrospective: an explicit question of whether to continue, an honest look at what worked and what didn't, and a set of things to try in order to fix the problems of the first iteration.
5. Repeat steps 3 and 4 until the software either sufficiently meets the business need or the project is ended early due to a fatal flaw. Always be willing to ask if this is the appropriate time to declare the system "done for now."
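To make steps 1 through 3 concrete, here's a minimal sketch of the gating arithmetic. Every figure, feature name, and rate below is invented for illustration:

```python
# Step 1: estimated annual return if X is automated (hypothetical figures).
hours_saved_per_year = 2_000
loaded_hourly_rate = 60
annual_roi = hours_saved_per_year * loaded_hourly_rate  # $120,000/year

# Step 2: rough complexity from the BDD exploration, in developer-days.
features = [
    # (name, business value per year, complexity in dev-days) -- all invented
    ("auto-import invoices", 60_000, 15),
    ("exception dashboard",  30_000, 10),
    ("forecast report",      30_000, 45),
]
day_rate = 800
estimated_cost = sum(days for _, _, days in features) * day_rate

if estimated_cost > annual_roi:
    print("Complexity and ROI don't match up -- stop here.")
else:
    # Step 3: work the highest value-per-complexity pieces first.
    for name, value, days in sorted(features, key=lambda f: f[1] / f[2], reverse=True):
        print(f"{name}: {value / (days * day_rate):.1f}x first-year return on build cost")
```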
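And for step 2, the behavior-driven exploration doesn't require heavy tooling. A plain given/when/then test, like the invented scenario below, is often enough to surface hidden complexity before much money is spent:

```python
# A behavior-driven scenario written as a plain test; no framework is assumed.
# Writing scenarios like this forces the "what happens when..." questions
# that expose complexity early.
def test_invoice_from_unknown_supplier_goes_to_exception_queue():
    # Given an incoming invoice from a supplier we have no record of
    invoice = {"supplier": "ACME Ltd", "amount": 1_250.00}
    known_suppliers = {"Initech", "Globex"}

    # When the import job processes it
    queue = "auto-post" if invoice["supplier"] in known_suppliers else "exceptions"

    # Then it lands in the exception queue for a human to review
    assert queue == "exceptions"
```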
___
Now, this won't work in most major corporations, for a variety of political reasons, but to me it seems a better system than the traditional corporate project structure, based on my experience with both successful and unsuccessful IT projects.
Yeah, that was a little optimistic about the powers of Agile for me, but I think the thing Agile gets right is that we all have to operate under uncertainty about the scope of software projects. In that sense, it's not a methodology for estimating the complexity of a software project but a way to start working without that estimate.
Perhaps the biggest reason this keeps causing problems is that companies have no good way of dealing with change. If you expect to finish every project you start, then you need to know more about each project than is realistic at the time you start it. A canceled project is a big failure for most employees, and not something they want happening to their careers.
However, until you establish reality, all the estimates in the world won't help much. And most customers cannot gauge reality until they are actually in the process.
I'm not saying Agile is a silver bullet -- it can go wrong in many ways, and it's not appropriate for every situation. However, it's the best we have for its niche.
> Further, I can work with you to figure out what portion of each project will give you the biggest bang for your buck. [...] At any point, you can say, "This is enough to generate value for me; let's move on to the next thing for a while." [...] On the other hand, you can spend half of that x00,000 developing a specification. [...] You won't discover the problems in what you really need until it's too late, and you'll end up spending more money in the end.
That's the message that should go out.