
The ability to model things precisely is the most important thing that separates programmers from non-programmers. Dealing with complexity is probably the runner-up, but domain experts should be just as good as, or better than, programmers at handling that one.

It's the need for precision that brings out the complexity. In my experience working with domain experts, they usually don't realize how complex their domain is. Only when we start trying to nail things down precisely do the contradictions and edge cases come out of the woodwork. Invariably it's order(s) of magnitude harder than it looked at first. Usually they are surprised and say they had no idea there was so much to do.

So while I agree that domain experts are accomplished at handling complexity, they do so the way that humans do it (ambiguously and inconsistently), not the way that computers do it (through formal specification). The ultimate bottleneck in software development is that there just aren't that many humans who can do the latter well.

I agree that if programming ever does escape the need for precision, then the game changes, completely and unrecognizably. A pet belief is that this would require computers that work more like the way humans do (down to the lowest level), but that's a wild guess. In any case, I haven't seen any evidence of things progressing in this direction, have you?




Humans resolve a lot of the complexity intuitively and unconsciously. That is not an inherently bad way to handle it, just one that is incompatible with current programming paradigms. This is not to discount the value of clear, explicit thinking in humans; it is just to say that programming would benefit a lot from the ability to apply common sense to mundane things.

For some dramatic evidence that the imprecise approach works, and works well, look at the success of Google and the non-delivery of the Semantic Web. Google's approach is probably best explained by Peter Norvig in his "Theorizing from Data" lecture:

http://www.youtube.com/watch?v=nU8DcBF-qo4

The state of the art in natural language processing is achieved with statistical inference, not by building precise linguistic models. See e.g. Marti Hearst's text here:

http://www.edge.org/q2008/q08_7.html
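
For a concrete taste of what "statistics over linguistics" means, here is a minimal Python sketch in the spirit of Norvig's well-known spelling corrector (the corpus filename is a placeholder): it encodes no grammar or morphology at all, it just prefers whichever candidate word occurs most often in a corpus.

    import re
    from collections import Counter

    # Word frequencies from a plain-text corpus (filename is a placeholder).
    WORDS = Counter(re.findall(r'[a-z]+', open('corpus.txt').read().lower()))

    def edits1(word):
        # All strings one delete, replace, or insert away from `word`.
        letters = 'abcdefghijklmnopqrstuvwxyz'
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = [a + b[1:] for a, b in splits if b]
        replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
        inserts = [a + c + b for a, b in splits for c in letters]
        return set(deletes + replaces + inserts)

    def correct(word):
        # Prefer known words; among candidates, pick the most frequent one.
        # Corpus frequency stands in for any model of *why* a word is right.
        candidates = ({word} & WORDS.keys()) or (edits1(word) & WORDS.keys()) or {word}
        return max(candidates, key=WORDS.get)

No parser, no part-of-speech tags, no theory of English; and yet this kind of frequency-driven guessing is exactly what Norvig argues scales where hand-built models stall.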

Back to programming languages: I believe a lot of the Ruby/Rails success is due to DSLs that mimic natural language. Rails goes as far as using pluralization. DSLs bring convenience at the price of less understanding: people tend to use them intuitively, without understanding precisely what happens, much more so than they do with ordinary libraries and frameworks.
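
To make the pluralization point concrete, here is a toy sketch (hypothetical Python, not actual Rails code) of the convention-over-configuration idea: the table name is inferred from the model's class name by naive pluralization rules. Rails' real inflector is much richer (it knows Person -> people), which is exactly why intuition can diverge from what the framework actually does.

    def tableize(class_name):
        # Infer a table name from a model class name (toy rules only).
        word = class_name.lower()
        if len(word) > 1 and word.endswith('y') and word[-2] not in 'aeiou':
            return word[:-1] + 'ies'        # Category -> categories
        if word.endswith(('s', 'x', 'ch', 'sh')):
            return word + 'es'              # Box -> boxes
        return word + 's'                   # default rule

    print(tableize('Category'))   # categories
    print(tableize('Person'))     # persons -- Rails itself knows 'people';
                                  # intuition alone won't tell you which
                                  # rules the framework actually applies.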

Perl, similarly, was created with linguistic principles in mind:

http://www.wall.org/~larry/natural.html

It is by no means conclusive evidence, but I believe there is an underlying trend.



