Hacker News

> Anything less than 100% correct is incorrect, it isn't necessarily broken.

> run and/or produce useful answers (it would depend on the input)

The fact that companies can and do sell products that generate wrong answers, lock up, or crash from time to time doesn't change the fact that those products are broken.

An old joke said that IBM stood for "Inferior, but marketable".



You'll note that isn't what I was arguing.

You can have code that is not 100% correct but that still runs, generates reasonable answers without crashing, and is very useful. Even code that has been worked on for years still has subtle bugs lurking in it.

Take the Linux kernel. It has bugs in it. Somewhere, there is a bug! I'm 100% sure of this. Same with Windows and OS X.

That a non-trivial piece of software has bugs is practically a tautology. We don't say that these products are broken, because then all software products would necessarily be broken.

Sure, in software development we are doing our damnedest to approximate correctness, but we never get there.

A tool that writes approximately correct code would be very useful.


How would it be useful to me, if I spent my time scrutinizing and debugging the code output by the tool?

I'm sorry, but this kind of inconsiderate, care-free attitude about bugs is an obstacle to me as a developer, to the point of being downright offensive. Assuming, of course, we're writing code more complex than Hello World.

All I want, as a programmer, is a tool which aids me in reading, writing and manipulating code without getting in the way. A clone feature is completely useless to me. A proper IDE has (or would have, actually) a library of code that I can choose from for my purposes, or would let me choose from snippets I've written in the past. Simply letting me drag a function to the point(s) in the code where I'd like it to be called, inserting the necessary #include directives and/or compiler/linker options, and perhaps filling in some parameters based on default values (a conceptually simple task) seems like science fiction to vim or emacs.

It's utterly stupid. The article is right on. We're wasting our time on syntax errors.

We don't appreciate our craft.


"How would it be useful to me, if I spent my time scrutinizing and debugging the code output by the tool?"

How would it be useful to me, if I spent my time scrutinizing and debugging code put out by myself?

I am going to do the scrutinizing and debugging either way!

How is my attitude about bugs carefree? They are everywhere. They are inevitable. That isn't carefree; those are the facts of life. You screw up, I screw up, Tom screws up. (Personally, I consider myself more in the paranoid camp when it comes to bugs.)

I agree that at a minimum an IDE has to do the things you listed.

If I had a tool that generated code pretty much as good as mine, that I could debug and ship, where would the loss be? The tool doesn't exist, and probably won't exist, but if it did, it would be awesome.


"How would it be useful to me, if I spent my time scrutinizing and debugging the code output by the tool?"

How would it be useful to me, if I spent my time scrutinizing and debugging code put out by myself?

Small point: I'd much rather debug errors I introduced than ones a program did. If I made the errors, they will have a familiarity about them - "oh, I always misspell that," or "I wasn't paying attention there." I can't predict how a program might get it wrong.


I'd much rather debug errors I didn't write myself; they are much easier to find. I find that code I wrote often scans the way I intended to write it, not the way I actually wrote it.


"How would it be useful to me, if I spent my time scrutinizing and debugging the code output by the tool?"

How would it be useful to me, if I spent my time scrutinizing and debugging code put out by myself?

To contrast those two, consider the familiarity you already possess with your own code, as opposed to code written by others.


<sarcasm> In other words, your code validates past computations and adds error-correction codes to every file you write to the HDD? After all, if you don't do that, the hardware itself will often produce buggy output and crash. (http://www.pcguide.com/ref/hdd/perf/qual/specRates-c.html) </sarcasm>

At best software running on commodity hardware works most of the time and acceptably broken really is good enough. Granted we want to minimize failures, but there is a wide range between good enough to be useful and perfect.


Yeah, that's why we don't have anyone fixing bugs encountered in software we use.

Please.

PS. Who cares about perfect? You can't have perfect, and that wasn't my point. Please reread.

PPS. I'm happy to see you're invalidating the entire subject of my current work (carrier-grade systems) based on what's good enough on commodity systems. Extrapolating is bad.

PPPS. Sarcasm tags are so 17th century Protestant.


That mindset often results in over-engineered crap.

I once wrote a program to parse source code in ways that are provably limited. However, based on the way the code was written, it worked often enough to be extremely useful. Just because an approach can never cover all cases does not mean it must be ignored.

Sniper bullets have tighter tolerances than the average bullet, but that increases cost, and the extra precision becomes useless at high rates of fire. Suggesting all systems need to be as accurate as possible is ignorant.

PS: I have written and used plenty of code sitting on DoD computers, and believe me, it's not all golden. Walk around the Pentagon and guess what is sitting on the vast majority of people's desks: Windows.


Again, you're extrapolating just to prove me wrong. I admit it may seem 'extreme' to some, but advanced tools aren't a threat to us. However, our current culture as programmers shows that we don't even trust other people's source code, let alone tools, to do our job. In this context of today, a tool that "writes code" (imperfect, of course) is not to be trusted any more than me, you, or our colleagues.

But do not downplay the importance of avoiding, finding, and fixing bugs. That's just wrong.


There are basically two ways to write software: you can either completely understand the problem and create a completely understood solution, or you can approximate a reasonable solution and then patch it. When it's possible, the first approach is far better; unfortunately, some of the worst systems were created when people tried to use the first approach on problems simply too complex for any one person to understand.

If you have ever copied a function, changed the name, and modified the code, you are on the second path. Plenty of people have gone down that path and know it's likely to produce bugs, but as long as you have reasonable mitigation strategies it can still be a good idea. Building a tool that does similar things with larger sections of code would be dangerous, but with a little care it could still be useful.

In the early days of Fortran, compilers tended to be buggy, so people would often monkey-patch the output. http://en.wikipedia.org/wiki/Monkey_patch While error-prone, this was significantly faster than hand-coding in assembler from the start. The practice died out with the invention of better compilers, but plenty of good software was written before then.

PS: I wish most software could be written using the first approach. I just don't think that's possible.



