
> > My question isn't whether you can test enough to catch all bugs. My question is whether time spent wrangling types gets you more value than time spent writing tests.

> Yes, by a tremendous amount, in my experience.

Well, then I'd have to ask what experience leads you to believe this. I don't mean years; I mean which languages, and, more specifically, what you observed.

> Tests are only as good as the person writing them. I could see a model working where the person that wrote the code isn't the person that writes the test, but that's definitely not how most development orgs work. If a dev is good enough/capable of writing comprehensive enough tests to accurately test the correctness of their code, that's great, but almost none are (I say almost because I actually mean "actually none" but am leaving room for my own error).

Static types are also only as good as the person writing them. C's type system, for example, lets through a wide variety of type errors. And users can easily bypass a type system or extend it poorly: I've written a lot of C#, and while C#'s type system can be extremely effective when used well, I've also seen it combined with dependency injection to cause all sorts of tricky bugs.
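
For instance, something like this compiles cleanly (a minimal sketch; the struct names are made up, but the implicit conversions are standard C):

    #include <stdio.h>

    struct point { int x, y; };
    struct account { double balance; };

    int main(void) {
        struct point p = {3, 4};
        void *opaque = &p;            /* any object pointer converts to void* implicitly */
        struct account *a = opaque;   /* ...and back to an unrelated type: no cast, no diagnostic required */
        printf("balance: %f\n", a->balance);  /* undefined behavior, but it compiles without complaint */

        char c = 1000;                /* silently truncated: a warning at most, never an error */
        printf("c: %d\n", c);
        return 0;
    }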

I don't think we can conclude much from bad programmers doing bad things except that good programmers are better, which is practically tautological.

> If you're an average dev, you'll write average tests (neither of these are insults), but that means you still won't catch everything (by a lot).

So? "Catching everything" isn't a thing--if you're talking about that, you're not talking about reality. There are two systems I've ever heard of which might not have any bugs--and in both cases an absurd amount of effort was put into verification (far beyond static types), which, even late in the process, still caught a few bugs. Static types aren't adequate to catch everything either.

> > But in the vast majority of modern software, it mostly just matters that you catch and fix bugs quickly--whether you catch those bugs at compile time or runtime is usually not as critical.

> I couldn't possibly disagree more with this statement.

*shrug* Okay... To be clear, "runtime" doesn't mean "in production".




> Well, then I'd have to ask what experience leads you to believe this. I don't mean years; I mean which languages, and, more specifically, what you observed.

Probably 20 Rails developers across 3 companies.

> So? "Catching everything" isn't a thing

The "everything" I'm referring to is less about "all bugs", and more about "Type errors, spelling mistakes, missing imports, etc". All code has bugs. Not all languages allow remedial spelling errors to make it into a build.

> *shrug* Okay... To be clear, "runtime" doesn't mean "in production".

It sure does not. I'm not sure that changes anything, though. It seems like being difficult for the sake of it to argue that catching potential bugs now is worth the same as catching them later. Obviously now is better.



