Hacker News

A friend of mine used this successfully with groups practicing Test Driven Development:

    - First, institute "Test First" development
    - Randomly pick some fraction of new tests to be reviewed 
        (Demand a minimum quality level.)
    - Measure productivity in terms of new passing tests completed
        with some multiplier for the quality score
This system could be gamed, but it would require a conspiracy consisting of a large fraction of the team, and any system could be gamed in that situation. This is the only method I know of that works when analyzed logically and has been shown to work in practice.
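The scoring scheme above could be sketched like this. This is a minimal sketch, and the details are assumptions: the comment only says "some multiplier for the quality score," so the averaging, the 0.0-1.0 quality scale, and the minimum-quality cutoff are illustrative choices, and all names are hypothetical.

```python
# Hypothetical sketch of the proposed metric: new passing tests completed,
# weighted by the quality scores from the randomly sampled peer reviews.

def productivity_score(new_passing_tests, reviewed_quality_scores,
                       min_quality=0.6):
    """Score = passing tests completed * average review quality.

    reviewed_quality_scores: ratings (0.0-1.0) for the randomly
    picked fraction of new tests that went through review.
    min_quality: the demanded minimum quality level (assumed cutoff).
    """
    if not reviewed_quality_scores:
        return 0.0  # nothing sampled yet, so no credit can be assigned
    avg_quality = sum(reviewed_quality_scores) / len(reviewed_quality_scores)
    if avg_quality < min_quality:
        return 0.0  # fails the demanded minimum quality level
    return new_passing_tests * avg_quality

print(productivity_score(40, [0.9, 0.8, 1.0]))  # 40 tests, avg quality 0.9 -> 36.0
```

Gaming this would require padding out many shallow passing tests *and* having the sampled reviews rate them highly, which is the conspiracy the comment describes.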


Which would be rewarded in your situation, building 30 really simple pages by hand or spending 1/2 that much time to code something which generated those pages automatically?

Once again, the best programmers write as little code as possible, which allows them to focus on quality over quantity. It's easy to turn 10 lines of code into 20 classes and gain nothing; it's much harder to see those 10 lines of code once someone has written the 20 classes.


Which would be rewarded in your situation, building 30 really simple pages by hand or spending 1/2 that much time to code something which generated those pages automatically?

This method doesn't have web pages in mind. It's more along the lines of domain models for something like energy trading.

Writing tests for 20 shallow, repetitive classes would result in 20 shallow-looking test classes, and that programmer would be called on it.

Web pages are really a narrow area of programming.

Once again, the best programmers write as little code as possible, which allows them to focus on quality over quantity.

How many functional specs are they accomplishing while they are doing this? I've known programmers who've created "entire new functional sections" in their app with 25 lines of code. Your issue is addressed by paying attention to functional specs.


Look, all metrics can be gamed.

Take an existing project, find all the places that link to each section of code, and you have some idea of how reusable things are. Tell people you're doing this ahead of time and you promote spaghetti code. Ask people how difficult an objective is and you get a wide range of biases based on how you use the information, etc.

The secret is not how to get the most reliable data; it's how to get the best outcomes, including the way people try to game the system.


Look, all metrics can be gamed.

Look, now it's obvious you didn't carefully read the original comment! (Left as exercise.)


I'd be concerned that the approach you laid out is something of a proxy for lines of code delivered.

My most productive days are the ones where I've removed huge blocks of unnecessary code.


I'd be concerned that the approach you laid out is something of a proxy for lines of code delivered.

Absolutely incorrect. Did you actually read my proposal? If the tests are of high quality, then the code passing them will be substantive. Also, in new development, what matters is functional specs delivered, and in a properly run TDD project, these two are strongly related.


My thought was that judging productivity by lines of code or by tests written (whatever the quality) is judging by what was done rather than by what should be done.

An extreme example of this sort of thing would be a programmer who looks at what needs to be done and says: "We don't need to write ANY code; there's an open-source app/library that does exactly what we need here."

By making that suggestion it's quite possible that they've saved their company months of work (versus implementing everything themselves). However, under a pure "number of tests written * quality of tests" metric, they're a miserable failure.

I think TDD and tests are a solid way to write software, but I don't think they're a great way to judge programmer productivity.


KLOC mentality applied to TDD.


That would be correct, except that the tests are peer reviewed in my proposal. You can't "code me a new minivan" in this situation, unless the test review process gets corrupted.



