I'd be concerned that the approach you laid out is something of a proxy for lines of code delivered.
Absolutely incorrect. Did you actually read my proposal? If the tests are of high quality, then the code passing them will be substantive. Also, in new development, what matters is functional specs delivered, and in a properly run TDD project, these two are strongly related.
My thought was that judging productivity by lines of code or by tests written (whatever the quality) is judging by what was done rather than by what should be done.
An extreme example of this sort of thing would be a programmer who looks at what needs to be done and says: "We don't need to write ANY code; there's an open-source app/library that does exactly what we need here."
By making that suggestion, it's quite possible they've saved their company months of work (versus implementing everything themselves). Yet under a pure (number of tests written × quality of tests) metric, they're a miserable failure.
I think TDD and tests are a solid way to write software, but I don't think they're a great way to judge programmer productivity.
That would be correct, except that in my proposal the tests are peer reviewed. You can't "code me a new minivan" in this situation unless the test review process itself gets corrupted.
My most productive days are the ones where I've removed huge blocks of unnecessary code.