
Writing good tests is an art. It's hard. It takes a deep understanding of _how_ the system is implemented, what should be tested, and what should be left alone.

Coverage results don't mean much. It takes some experience to realize how easy it is to introduce a major bug with 100% test coverage.
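
A toy illustration of that, assuming pytest and a made-up function: every line is executed, so the report says 100%, but the interesting case is never exercised.

    # Hypothetical example: 100% line coverage, major bug still present.
    def average(values):
        # Bug: raises ZeroDivisionError on an empty list.
        return sum(values) / len(values)

    def test_average():
        # This single test executes every line of average(),
        # so line coverage is reported as 100%...
        assert average([2, 4, 6]) == 4

    # ...yet average([]) still blows up in production.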

Tests are supposed to tell you whether a piece of code works as it should. But I have found no good way of judging how well a test suite actually works. You somehow need tests for your tests, and a way to version the test suite.
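
One partial answer to the "tests for tests" problem is mutation testing: deliberately break the code and check whether the suite notices. A minimal hand-rolled sketch, with made-up functions and no particular tool assumed:

    # If the suite still passes against a deliberately broken "mutant",
    # the suite is too weak to be trusted.
    def price_with_discount(price, rate):
        return price * (1 - rate)

    def price_with_discount_mutant(price, rate):
        return price * (1 + rate)  # mutation: operator flipped

    def weak_test(fn):
        # A rate of 0 cannot tell the original from the mutant.
        return fn(100, 0) == 100

    assert weak_test(price_with_discount)         # passes, as expected
    assert weak_test(price_with_discount_mutant)  # also passes: the mutant survived
    print("mutant survived: the tests did not catch the injected bug")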

An overemphasis on testing also makes the code very brittle and a pain to work with. Simple refactorings and text changes require dozens of tests to be fixed, and library changes break things in weird ways.
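
A common cause of that brittleness is tests pinned to implementation details instead of observable behaviour. A hypothetical sketch (the class and names are invented; unittest.mock is standard library):

    from unittest.mock import patch

    class ReportService:
        def build(self, rows):
            return self._render(sorted(rows))

        def _render(self, rows):
            return ", ".join(rows)

    # Brittle: pinned to the private helper, so renaming or inlining
    # _render breaks the test even though build() still behaves the same.
    def test_build_brittle():
        with patch.object(ReportService, "_render", return_value="a, b") as render:
            assert ReportService().build(["b", "a"]) == "a, b"
            render.assert_called_once_with(["a", "b"])

    # Robust: checks only the observable behaviour, survives refactoring.
    def test_build_behaviour():
        assert ReportService().build(["b", "a"]) == "a, b"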

Unless I know the system being tested, I take no interest in tests.

There are clever, hacky ways to test systems that will never pass the "100% coverage" requirement and yet are a joy to work with. But they're the exception.



The point about coverage results is an important one to understand. Something that I like to say when discussing this with other folks is that while high code coverage does not tell you that you have a good test suite, low code coverage does tell you that you have a poor one. It's one metric amongst many that should be used to measure your code quality; it's not the be-all and end-all.


Code coverage is a bad metric either way. As soon as it gets mentioned anywhere, an MBA manager wants it as close to 100% as possible and Goodhart's law kicks in.

It's synonymous with LOC. Don't bring it up anywhere.
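
What gaming the number tends to look like, as a hypothetical sketch: tests that execute code without asserting anything still count toward coverage.

    def apply_discount(order_total, coupon):
        if coupon == "SAVE10":
            return order_total * 0.9
        return order_total

    # Coverage theatre: both branches run, so line and branch coverage
    # look great on the dashboard, but nothing is asserted. Any bug in
    # the discount maths sails straight through.
    def test_apply_discount_for_the_dashboard():
        apply_discount(100, "SAVE10")
        apply_discount(100, None)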


There are techniques to keep test quality high.
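
One such technique, sketched below: property-based testing, which asserts invariants over generated inputs instead of a handful of hand-picked examples. This assumes the third-party hypothesis library; the function under test is made up.

    # pip install hypothesis
    from hypothesis import given, strategies as st

    def dedupe_keep_order(items):
        seen = set()
        return [x for x in items if not (x in seen or seen.add(x))]

    @given(st.lists(st.integers()))
    def test_dedupe_properties(items):
        result = dedupe_keep_order(items)
        assert len(result) == len(set(items))  # no duplicates survive
        assert set(result) == set(items)       # nothing is lost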

However, usually no one really cares about testing at all. Many projects are also internal, not critical, and so on.

Move fast, break things, deliver crappy software.



