> Almost always, your customer doesn't know what they want - they think they do, but they don't

I honestly don't understand this statement. Could you provide an example to elaborate on this point? I have read it in so many places, but it sounds more like the fashionable thing to say.

The "need" have a starting point, may be a very high level problem statement, and then through analysis, back and forth question answering, you discover the need in more concrete terms instead of abstract terms where you started with.



It is very easy, actually. Often a business uses Excel for their processes and flows; that is, they have Excel sheets that they edit, copy, send around, and merge back. Now, we all know the problems that exist with this approach, and even the customer knows them. But that doesn't mean they can perfectly describe what an application that gets rid of those Excel files should look like.


But isn't the problem statement itself what you need from the customer? My idea is that the customer approaches you with the problem statement, and their expectation is that you would provide possible solutions to it, instead of the customer defining the solution exactly and just asking you to "code" it.


You describe the requirements analysis step, which is itself a big issue with traditional waterfall projects. You bring someone (or a team) in from either outside or inside the company. At that point you are already falling victim to the fallacy that you can fully and exhaustively analyse the whole problem domain. And even if you think you can do a 100% complete requirements analysis, who says the solution/software you deliver three years from now will exactly fulfil these requirements, and, moreover, what makes you assume the problems of today are the same problems the business will have three years from now?


Well, the moment you talk about 'possible solutions', you've already accepted the original claim. A problem described in vague terms has many possible solutions. Some of them will actually solve the exact problem, some of them won't. Deciding which is which is not trivial.

How do you go about it? Do you build and deliver all possible solutions, and then the customer gets to choose one? Do you prototype many possible solutions, agree with the customer on which prototype is most promising, and turn that into the final deliverable, hoping you captured the relevant details?

Or do you start working on a basic solution, let the customer use that and provide detailed feedback on what it's doing well or not, rinse and repeat until the customer says 'good enough, thanks'?


Let’s say the high level problem statement is that a company needs to add a Search feature to their Store Inventory product.

Here are some questions that can arise during the course of this project:

1. What fraction of customers are asking for it, and how badly? How do you know their judgment is correct? What if people who aren't asking for it would also go on to love it?

2. How deep and fast do you want the Search functionality to be?

3. How much time and money are you willing to invest in it? What if you find out after a month of work that Search is much harder than it looks?

4. Let’s say you discover there’s a bug in the indexing system which leaks personally identifiable information even though it’s supposed to hide it. Will you postpone the project till the bug is fixed, or work around it somehow?

It’s nearly impossible to answer these kinds of questions right at the start of the project.

That doesn’t mean there aren’t projects where all the requirements are known upfront in precise detail. Usually that correlates with the associated technologies being very mature and their capabilities well understood by the people involved.
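To make questions 2 and 4 a bit more concrete, here is a rough, purely illustrative sketch (Python; the field names like supplier_email and the substring-matching approach are made up, not anything the company actually specified). Even a "simple" inventory search bakes in answers to unstated requirements: exact vs. fuzzy matching, which fields get searched, and whether PII is hidden.

    # Rough sketch only: fields and matching strategy are assumptions for illustration.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Item:
        name: str
        sku: str
        supplier_email: str  # personally identifiable: should this be searchable at all?

    def search(items: List[Item], query: str, redact_pii: bool = True) -> List[dict]:
        # Naive substring search over an in-memory inventory.
        # Each default here is a requirements decision in disguise:
        # case-insensitive substring match (question 2), which fields are
        # searched, and whether PII is returned (question 4).
        q = query.lower()
        results = []
        for item in items:
            if q in item.name.lower() or q in item.sku.lower():
                row = {"name": item.name, "sku": item.sku}
                if not redact_pii:
                    row["supplier_email"] = item.supplier_email
                results.append(row)
        return results

    inventory = [Item("Blue widget", "BW-001", "sales@widgetco.example")]
    print(search(inventory, "widget"))  # [{'name': 'Blue widget', 'sku': 'BW-001'}]

Every one of those defaults is something the customer usually hasn't thought about until they see it behave "wrong".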


It's an old and widely accepted truism. I don't know how much formal research has been done into it.


Formal research into software development techniques tends to be junk. It's very very hard to conduct a meaningful study of professional-scale development. The costs are too high, the confounding variables too many, and the industry demand too low.

Typically you'll get software researchers with no industry experience conducting studies on "subjects of convenience"—their undergraduate programming classes.



