TDD can be great if you know exactly what you're building and what you must test for, but it gets in the way when you're rapidly prototyping and exploring.
Just don't be dogmatic about practices like TDD. It's always a tradeoff. Being a good engineer means understanding when the tradeoff is worth it and when it isn't. Be wary of development teams, or people, that force you to practice certain methodologies regardless of context.
I think it depends on what you're exploring, honestly.
TDD comes out of the same circles as Extreme Programming, aka XP. In XP, there's something known as a spike, which is basically a short-lived technical prototype. If you really don't know how to tackle something, you build a quick one to throw away.
In that case, yes, definitely don't do TDD, because a goal of TDD is to build up a good test suite for production code. It's not a good match for throwaway code (assuming the team really has the discipline to throw it away.)
But if I'm exploring in a way that feels less disposable to me, which happens often, then I'm happy to use TDD. E.g., if I want to try out a new way of solving a problem, or I have an idea for an architectural improvement. Then I'll use TDD to force myself to think about it from the outside in, to keep design considerations to the forefront.
> TDD can be great if you know exactly what you're building and what you must test for, but it gets in the way when you're rapidly prototyping and exploring.
It sounds like you're making the mistake of thinking that TDD means writing the tests first. That's not quite right but it's understandable that lots of people believe it given the way TDD is usually discussed.
In TDD you should be writing one test first. Then you write enough code to make that test pass. Then add a second test. Get that to pass. And so on. If you realise you missed something, you can delete tests that don't make sense any more, just like you delete code that doesn't make sense. In essence TDD is about writing the tests in parallel with writing the code instead of writing them afterwards.
You can easily prototype and experiment with TDD because you're never much further ahead with your tests.
TDD is faster than writing tests later because your code has to be testable throughout the process. Writing tests later often means you need to refactor or rebuild to make it testable, which is a waste of effort, even in a prototype.
I think what Kent is saying there is that he knows which tests he can skip writing because they provide little value. If you're as experienced and knowledgeable as Kent then that's probably true.
For the rest of us it's better to write too many tests than too few. You might waste some time, but that's a smaller problem than accidentally skipping a test that was actually useful and would have pushed your code in a better direction than you'd have gone without it.
> For the rest of us it's better to write too many tests than too few. You might waste some time
That's a very bad attitude that is all but enforced by TDD proponents.
It's very easy to see which tests are useless and which are not. So you end up with people writing thousands of unit tests and little-to-no integration or functional tests because:
- "TDD told me so", and
- Testing frameworks are written by people who adhere to the same philosophy
Whereas there's very little utility in those "too many tests", and they give you a false sense of security.
> It's very easy to see which tests are useless and which are not.
For trivial things, maybe, but then people start applying the 'rule' to non-trivial things "because it's easy" and they get it wrong. All I'm saying is that until you are a Kent Beck level expert it's safer to err on the side of caution. When you have a ton of experience and knowledge you can do what you like.
Also note that I said it's better to write too many tests than too few - that still doesn't mean you should test everything, or that you have to test trivial things. It just means that when you're not 100% sure, it's safer to write a test.
> For trivial things, maybe, but then people start applying the 'rule' to non-trivial things "because it's easy" and they get it wrong. All I'm saying is that until you are a Kent Beck level expert it's safer
Most of the stuff we do is trivial [1]. Unfortunately no one really teaches testing or shows what needs to be tested. Hence the prevalent "write hundreds of tests, perhaps some of them will actually be useful". And most available examples and most available advice amount to exactly that: write many, many useless tests.
I have a perfect example from the Java + Spring world. It's common to have a Controller + Service + Facade + external service client definition.
So I've seen countless times when tests are written as follows:
- external service client is mocked, multiple unit tests for the Facade to make sure it returns data
- Facade is mocked, multiple unit tests for Service, to make sure data is returned
- Service is mocked, multiple unit tests for Controller, to make sure data is returned
They are all the same tests, and can be easily deleted and replaced by a single suite of:
- external service is mocked, including invalid responses and timeouts. The actual REST service provided by the app is tested for all the scenarios that were quadruplicated in the unit tests above (see the sketch below)
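A rough sketch of what that single suite might look like, assuming Spring Boot and WireMock; the endpoint, class names, and config key are made up for illustration:

```java
// Only the *external* service is stubbed; the app's own Controller -> Service
// -> Facade -> client chain runs for real against a random-port server.
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

import org.junit.jupiter.api.*;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT,
        properties = "external.service.base-url=http://localhost:8089") // assumed config key
class ProductEndpointIT {

    static WireMockServer externalService = new WireMockServer(8089);

    @Autowired TestRestTemplate rest;

    @BeforeAll static void startStub() { externalService.start(); }
    @AfterAll  static void stopStub()  { externalService.stop(); }

    @Test
    void returnsDataWhenTheExternalServiceResponds() {
        externalService.stubFor(get(urlEqualTo("/upstream/products/42"))
                .willReturn(okJson("{\"id\":42,\"name\":\"Widget\"}")));

        ResponseEntity<String> response = rest.getForEntity("/products/42", String.class);

        Assertions.assertEquals(HttpStatus.OK, response.getStatusCode());
        Assertions.assertTrue(response.getBody().contains("Widget"));
    }

    @Test
    void surfacesAnErrorWhenTheExternalServiceTimesOut() {
        externalService.stubFor(get(urlEqualTo("/upstream/products/42"))
                .willReturn(aResponse().withFixedDelay(5_000))); // longer than the client timeout

        ResponseEntity<String> response = rest.getForEntity("/products/42", String.class);

        Assertions.assertTrue(response.getStatusCode().is5xxServerError());
    }
}
```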
You don't need to be a "Kent Beck level expert" to do that. However, almost literally nothing teaches you to do that or helps you write those tests. Almost literally everything is hardwired to write small useless unit tests.
[1] Except UIs. I have no idea how to test UIs, and I don't think anyone does :D
The code I write initially can often be very dumb, repetitive and procedural. I often have to discover and see what the data structures really look like, what the public API looks like, what I leave in code and what I want to be data driven etc.
At this stage it makes little to no sense to write tests first. All I need is to see the output, which might change dramatically during development. It’s a very fast, direct way of programming, with a lot of copy pasting and long parameter lists.
When the code is laid out like that, then I can see where it’s going. I see the repetitions, the edge cases etc. Now it starts to make sense to design the data structures, give declarative names, find the right abstractions. Now I can move forward with tests.
>The code I write initially can often be very dumb, repetitive and procedural. I often have to discover and see what the data structures really look like, what the public API looks like, what I leave in code and what I want to be data driven etc.
I do this plenty after writing a new (high level integration) test.
Usually I have a fairly clear idea of what the public API I want should look like before I code. A test that calls that API makes a good test. I might tweak the way it is called when coding the implementation, but probably not by much. Underneath it the integration test doesn't care what kind of data structures you use and it provides a safety harness to let you discover at will.
>At this stage it makes little to no sense to write tests first. All I need is to see the output, which might change dramatically during development.
I frequently write the test first for this type of code - e.g. APIs that output JSON blobs or command line apps that output a wall of text.
Where I'm expecting the output to change, I set the test up to temporarily overwrite the expected output with the program's actual output, eyeball the result, and commit it all together if it's correct.
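Roughly, a sketch of that kind of snapshot test (the render method and the UPDATE_SNAPSHOTS switch are illustrations, not a particular library's API):

```java
// Golden-file style test: with UPDATE_SNAPSHOTS=1 it overwrites the expected
// output with the actual output so you can eyeball the diff and commit both
// together; otherwise it compares against the committed file.
import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ReportSnapshotTest {

    // Stand-in for the real code under test, kept inline so the sketch compiles.
    static String render(String orderId) {
        return "REPORT for " + orderId + "\n";
    }

    @Test
    void reportMatchesCommittedSnapshot() throws Exception {
        String actual = render("order-123");
        Path snapshot = Path.of("src/test/resources/report.snapshot.txt");

        if ("1".equals(System.getenv("UPDATE_SNAPSHOTS"))) {
            Files.writeString(snapshot, actual);   // refresh the expectation, then review the diff
        }

        assertEquals(Files.readString(snapshot), actual);
    }
}
```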
But what test do I write? What does the poorly documented 3rd party API return? Is the test failing because there's an edge case in the API or because my code is wrong?
The very first test can be a simple "Did my HTTP request receive a response?". Then you can build on "Does this HTTP response have this value I need"....
etc
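For instance, as a sketch using plain JUnit and the JDK's HttpClient (the URL and the field name are placeholders):

```java
// Sketch of the "build up from the smallest observable thing" approach:
// the first test only checks that the request gets *a* response, the next
// one checks for a specific value. The URL is a placeholder, not a real API.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class ThirdPartyApiExplorationTest {

    private final HttpClient client = HttpClient.newHttpClient();
    private final URI endpoint = URI.create("https://api.example.com/v1/things/1"); // placeholder

    @Test
    void requestReceivesAResponse() throws Exception {
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(endpoint).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
    }

    @Test
    void responseContainsTheValueINeed() throws Exception {
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(endpoint).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        assertTrue(response.body().contains("\"id\""));  // whatever field you actually care about
    }
}
```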
The way I have always gone about TDD is just that I am testing the code I am writing by running the test, not from the main entrypoint of the application. The things you would log and look for when you run the application you instead validate with an `assert()`. Then once you have finished developing, you do a single pass verification from main and you have both a test and a function written
Do you see how playing this game of writing iterative tests would actually provide slower learnings than prototyping a call to an API and receiving a result that would inform a series of tests that would be much more informed?
It's just as easy to be inefficient writing useless/bad tests than it is writing bad code. I think every developer should do TDD in their career, because it will make you think about the testability of the code. That said once you understand how to write testable code than I think it makes sense to be flexible on whether you write the test first or take a crack at an implementation.
The argument that you should have a good understanding of requirements I think leads to a lot of analysis paralysis, and isn't a practical method for building novel things in a timely manner.
I think the key distinction here is "run my new function through the entry point of the application" vs "run my new function from a test". You leave out whatever startup time is involved from the rest of the application to get to that function, & you have a tiny feedback loop to test the function.
You have your test which gets the response, and you can still log the output to see what it looks like. It's a little slower at first since you have to write the test, but I'm not sure I see a big distinction vs "have a small script that sends the request".
> Do you see how playing this game of writing iterative tests would actually provide slower learnings than prototyping a call to an API and receiving a result that would inform a series of tests that would be much more informed?
And do you see that in many examples just coding ahead without considering how to test the code later will in many cases leave you with code that is such a pain to test that you start avoiding the tests because "they take too much time"?
If you start with the test the question of "how can this be made testable" is unavoidable.
I am not dogmatic here. If you are operating on the level of "I am happy if anything works at all" then starting with writing a test is probably a bad idea, better try things out first, scrap the whole thing once you understood what you wanna do in which way and then start writing the tests.
For a lot of adhoc code test driven development doesn't make too much sense, but if the stuff you write could ruin someones day (or life) maybe it is the way to go.
You start by allowing external requests from inside the test and automate your exploration of the API. Then you slowly evolve it to mock or stub the elements that interact with the external system so you can run your tests in isolation.
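A sketch of that evolution, with all names invented: first the test exercises the real external system so you learn what it actually returns, then the same test is pointed at a stub that replays what you learned:

```java
// Exploration-to-isolation sketch: the seam (RateSource) is extracted while
// exploring, then the real HTTP-backed implementation is swapped for a stub
// so the test no longer depends on the external system being up.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class ExchangeRateTest {

    interface RateSource {                       // the seam you extract while exploring
        double rateFor(String currency);
    }

    // Step 1 (exploration): an implementation that calls the real external API
    // would live here; you run it a few times to learn the shape of the data.

    // Step 2 (isolation): a stub that returns the responses you recorded.
    static class StubRateSource implements RateSource {
        public double rateFor(String currency) {
            return "EUR".equals(currency) ? 1.08 : 1.0;   // recorded/representative value
        }
    }

    @Test
    void convertsUsingTheCurrentRate() {
        RateSource rates = new StubRateSource();   // swap in the real source while exploring
        double converted = 100 * rates.rateFor("EUR");
        assertTrue(converted > 100);
    }
}
```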
There is a difference between writing tests once you're done with your exploratory phase and know what you have to code up and doing full blown TDD from the beginning.
Not really. TDD helps you think about what you're writing, and even when you're prototyping from scratch it gives you a good overview of the complexity of what you're building. You can't just finish a method that is a few tens of lines long and then try to test it: you write a small test case, then you evolve the method under test, and so on, and you can see when it's becoming too complex. I think some people just aren't used to small methods whose responsibilities are well thought out beforehand and try to be cowboys instead. TDD is about the quality, clarity, and maintainability of the code under test, not necessarily the correctness of the code in production.
Here's the thing. When I'm rapidly prototyping and exploring, I'm constantly asking questions and making hypotheses about what the thing I'm exploring will do when I turn various knobs and pull different levers. Part of any good process of making guesses and comparing the expected and actual results is making notes about the test and results. Those notes could be written in natural language, and many notes should start there, but why not formalize the guesses and discoveries and write them as executable, reproducible, tests?
Essentially if you don't have an existing code structure in place, TDD will be very confusing and you'll end up tangling yourself into a coding mess.
On the contrary, if you have a structure in place and need to ensure the correctness of the code you're about to write, it is incredibly useful.
Building my own stuff, I hardly ever write any tests tho as in the early stages of a project, testing will slow you down as the tests are essentially locking your code in. For me personally, I prefer to have my code be more fluid so I can rapidly change it without having the overhead of having to write and rewrite the corresponding tests.
When I come across some tricky logic tho I will definitely crank out some tests as it actually helps to create the solution.
As usual context is key to knowing whether TDD (or even testing) is appropriate or not.
I disagree. TDD shines most when you don't know what you are building. It allows you to test hypotheses without needing to over-implement a system to be able to begin to gather data about your design.
In fact, if you know exactly what you are building, how could tests drive your decisions? You have already made up your mind. If you know what you are building you can simply implement it straight away without the data gathering phase.
> if you know exactly what you're building and what you must test for
This is the main issue that I have with TDD. I use a methodology that I call "Evolutionary Design,"[0] and TDD won't work for that.
But I think they have the right idea, in emphasizing the need for extreme Quality. I like to achieve this, by using test harnesses, as opposed to unit tests[1].
> but it gets in the way when you're rapidly prototyping and exploring.
If you prototype a new feature for an existing system, a test lets you execute only the code you actually need. This will shorten your feedback loop and allow you to iterate faster. Refactoring a test is fine. Writing a BS test just to explore a solution is fine.
In my experience, following a test-led practice will help you build a simpler system which will be easier to maintain.
I tend to view TDD as a form of coding using a REPL, like in Lisp, where you can easily test chunks of your code as you go along. The main difference is that you leave behind your tests so they can be used in the future.
Eh? You can’t prototype or explore if you have no idea what you are building. You obviously have some idea (3rd person shooter or accounting system?) but might not be sure how to organize your code.
However, well-written tests are independent of how you organize your code or any other implementation details. So tests are great even when you are just prototyping or exploring. You can rapidly change your code and have the tests ensure that it still works the same way. And the end result is a fully tested, easy-to-maintain code base. What's not to love about that?
> TDD can be great if you know exactly what you're building and what you must test for
But you should know this from your functional requirements, right?
(Slightly sarcastic comment, I know in a lot of places people just start coding without a plan, but I think TDD and functional requirements go together well).
Well, I don't do that. But I usually don't know all the names of my (internal) interfaces and models and properties right at the start. I might move things around and rename stuff along the way. And I think that's quite normal if you don't live in a world where the spec for a small task is multiple books thick.
Doesn't TDD get in the way then? Wouldn't I keep refactoring the tests all the time?
> But you should know this from your functional requirements, right?
Functional requirements describe observable behaviour. You can't write any meaningful tests for that behaviour until a lot of your code is written.
Yet somehow in all the discussions there's this unexplained leap from "you have the requirements" -> "well, you just write a unit test for your function first, and then code it".
My functional requirements come in the form of "Given publisher A in Country B and contract requirements C the service must return D for a subset of products E", for multiple combinations of A, B, C, D, and E. And where each of those may and will require lookups in external services and databases.
I develop products much faster with automated tests.
And people always ask if it's really true every time I say that.
What I've found is that you need to reduce the friction of writing tests if you want them to get written. Once it's easier to write automated tests than to test manually, the benefits are a lot clearer.
Manual testing has a much higher, but hidden, cost.
If the friction is higher, tests will be forgotten amid the pressure from stakeholders and deadlines.
Once you discover the benefits of low-friction automated tests and write testable code by default, even if you don't practice TDD by the book, you can still get some of the benefits.
And if you are using static typing, the number of tests you need to write goes down significantly.
My beef with TDD is the same as my beef with a lot of estimation techniques. They really need for you to have done the thing before. Faith that you "know what tests are needed" is surprisingly loaded. Unless you are writing, effectively, integration tests for specific layer. But, at that point there is no guarantee that you know how to write the magic in between. (There was an interesting attempt at a sudoku solver using TDD once. It... didn't end well.)
Though, typing that last part out, maybe the concern is over the level of the tests. Unit tests are surprisingly easy to understand based on how you have divided out the code. However, there is very little that dictates how you have done this division. Path dependence is a huge factor in what divisions of code you will end up with. Such that I have a hard time seeing how that works in a TDD fashion.
I don't find much need to have done the thing before.
Before I write a line of code, test or no, I usually have some idea of what the code should do. Then I run the code to see if it does what I want. So if I'm not doing TDD, I'm still doing "manual test test first"; the test is just in my head.
TDD is just saying, "Let's start by automating a bit of that test and make the test fails correctly. Then we write production code until the test passes. Once it goes green, we look at what we've got, see how we like it, and maybe refactor a bit before writing the next test."
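As a toy sketch of that loop (names invented; the "production" code is kept inline only so the sketch compiles):

```java
// Tiny illustration of the cycle described above:
// red      - write one failing test for the next small behaviour
// green    - write just enough production code to make it pass
// refactor - clean up with the test as a safety net, then add the next test
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceFormatterTest {

    // Stand-in for the production code; in a real project this lives elsewhere.
    static String format(long cents) {
        return String.format("$%d.%02d", cents / 100, cents % 100);
    }

    @Test
    void formatsWholeDollars() {          // test #1: written first, fails, then made green
        assertEquals("$5.00", format(500));
    }

    @Test
    void formatsCents() {                 // test #2: added only after #1 passed
        assertEquals("$5.25", format(525));
    }
}
```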
If I'm totally off the map and flailing around, yeah, I won't write tests first. I'll hack at something until I have a clue. And then I'll throw that code away and write my first test. Which is something I should do regardless of whether I'm doing TDD, as scratch code like that is too messy to keep. Easier to dump it and start fresh.
I feel exactly the same way. TDD helps me think about the requirements, because even if I’m building something as a prototype, I still have specific requirements in mind. Those will change as I get feedback for the prototype, but even a really, really simple test will help me be more secure in knowing my reworking of the prototype didn’t break some very fundamental needs of the system.
My guidance would be to not start with many tests when writing a prototype / initial development of something - just write one or two of the most basic things you’re confident you’ll need. As your finished product comes into focus, then you can add more tests.
I don’t literally write a test before every line of code, yet I still consider what I do to mostly be TDD.
Absolutely. For me it's an incremental, iterative process. I write a tiny bit of test. And then I make it pass, often in a way that's a little too simple. So then I'll go back and improve the test, which forces me to improve the code.
Writing too many tests feels like getting ahead of myself. It feels like a bet that I won't learn anything or think of anything new as I go. Which is a bet I don't like making, because it often becomes self-fulfilling.
Yeah, almost everything good in tech attracts people who push a good thing too far. If not for working with some reasonable advocates, the FP zealots would have seriously put me off the techniques entirely.
> My beef with TDD is the same as my beef with a lot of estimation techniques. They really need for you to have done the thing before. Faith that you "know what tests are needed" is surprisingly loaded.
Why would you need to know what tests are needed? Just about any bit of functionality you can come up with to implement, you should be able to come up with a test to test it. It's not really harder to know what code to implement than it is to know what test will exercise that implementation.
And at any point in time you only need to think of a single test before you're back to coding. It's not like you need to plan out 50 tests up front. Whatever the next handful of lines of code you're going to write, write a test for that first, then write those same lines. Then pick the next handful via a test, then write them.
I don't understand the difference in knowledge at all.
The best example I can think of on this, is to look up that TDD creation of a Sudoku solver. This is collected in an old HN post at https://news.ycombinator.com/item?id=3033446.
It is reasonable to think you can know what functionality you need to test. And, for certain, if you know the boundaries of the system you are building, you will know some. That said, there is a tendency in our industry to build more boundaries than needed, period. It would be like a community garden arranging its plots by the type of plant the community will grow, when the actual divisions should be based on how many community members you will have.
Fair, "best" is clearly the wrong word there. A pointed example, then? :D
That said, I'm more than game for alternative examples. Have any that show benefits? Most examples I ever see fall, at best, into drawing the rest of the owl.
The more I progress through my career, the more useless unit tests become.
My requirements are in terms of observable program behaviour, and that is what needs testing. Not the hundreds/thousands of tiny one-off tests in between. And... you can't test that observable behaviour with TDD until you've written a solid chunk in between.
TDD agrees that you should only test observable behaviour. There should be no need for you to implement a large chunk of your application for the first test to pass. Are you writing many tests before you begin implementation? TDD promotes that you start with just one.
> Are you writing many tests before you begin implementation?
No. I usually write functional/integration tests after I've written most of the code.
> TDD promotes that you start with just one.
And that one will be absolutely entirely useless. Because what would be that "first one test"?
Let's assume it's a REST or GraphQL service that returns aggregated data from multiple external services. Until you have more-or-less functioning code (including all model definitions, client definitions, data transforms etc.) you can't even write a "first test for observable behaviour". Also here: https://news.ycombinator.com/item?id=34765360
Whatever you want. Just pick some small attribute that you expect to observe.
I'd personally start with failure modes as they are the most important and interesting code you will write. Hell, you can likely not even bother testing the so-called "happy path" because, really, who cares what happens during success? Failure is where you want users and future readers to have the most documentation to ensure that when things go wrong the metaphorical bridge doesn't come crashing down into the river but rather remains in a repairable state.
If it's a REST endpoint that aggregates data from multiple external services, you probably want to see a "retry after" response sent to the client if any of those external services are temporarily unavailable. So write a test that asserts that, then write code that does it. This is also a nice first case as you can detect an unavailable external service quite early in your pipeline, requiring very little of the implementation to see the test pass.
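As a sketch, again assuming WireMock for the unavailable upstream and a made-up /aggregate endpoint and config key:

```java
// Hypothetical first test: the upstream is down, so our endpoint should answer
// 503 with a Retry-After header. Only the external service is stubbed.
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

import org.junit.jupiter.api.*;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT,
        properties = "upstream.base-url=http://localhost:8090")   // assumed config key
class AggregateEndpointRetryTest {

    static WireMockServer upstream = new WireMockServer(8090);

    @Autowired TestRestTemplate rest;

    @BeforeAll static void start() { upstream.start(); }
    @AfterAll  static void stop()  { upstream.stop(); }

    @Test
    void unavailableUpstreamProducesRetryAfter() {
        upstream.stubFor(get(urlPathMatching("/.*"))
                .willReturn(aResponse().withStatus(503)));   // upstream temporarily unavailable

        ResponseEntity<String> response = rest.getForEntity("/aggregate/42", String.class);

        Assertions.assertEquals(HttpStatus.SERVICE_UNAVAILABLE, response.getStatusCode());
        Assertions.assertTrue(response.getHeaders().containsKey("Retry-After"));
    }
}
```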
No need for religion. If documenting your code later works for you, go for it. Nobody cares. Tests won't be driving your development, so it won't be TDD, but it'll be something and that may be enough. However, in reality most people simply don't have the wherewithal to plan for things like being able to test unavailable remote services if they don't have a test case in front of them forcing them to and won't bother when it is too hard to add later, so there is something to be said about the TDD approach as a general rule, deviating only after you fully understand the tradeoffs.
> Just pick some small attribute that you expect to observe.
The expected "attribute" is correct data being returned.
> Hell, you can likely not even bother testing the so-called "happy path" because, really, who cares what happens during success?
wat?
> If it's a REST endpoint that aggregates data from multiple external services, you probably want to see a "retry after" response sent to the client if any of those external services are temporarily unavailable. So write a test that asserts that, then write code that does it.

So again you want me to write a meaningless test that will be duplicated anyway during the actual proper test.
> However, in reality most people simply don't have the wherewithal to plan for things like being able to test unavailable remote services if they don't have a test case in front of them forcing them
This is a weird statement that is definitely not backed up by reality.
> there is something to be said about the TDD approach as a general rule, deviating only after you fully understand the tradeoffs.
The problem is, everyone advocates writing multiple useless tests, including TDD proponents. You can't learn "tradeoffs" until you look past the dogma, and once you've looked past the dogma, most of these "tradeoffs" turn out not to be tradeoffs at all, but trivial things that you would do anyway.
> The expected "attribute" is correct data being returned.
Even in the simple example given, there are probably hundreds of different failure cases to account for. The "correct data being returned" will never be a single attribute unless your application does effectively nothing.
> wat?
The successful case is usually the most understood behaviour, most visible, and easiest to recreate. If you are going to skimp out on documenting some aspects of your code because you hate future developers, the most understood aspect is where you are best to skimp.
> So again you want me to write a meaningless test that will be duplicated anyway during the actual proper test.
No. Why would you write the same documentation over and over again? This makes no sense.
> This is a weird statement that is definitely not backed up by reality.
It is certainly backed up every time I have worked with developers who put no effort into making their code testable, making it way harder than it should be to test certain cases afterwards. And in my experience they usually just throw their hands up in the air and say "testing takes too long" after they've made it unnecessarily hard to test.
Is it that you always work alone? If so, I can see why you think documentation isn't that useful. If you have a decent memory, it probably isn't. But most people have to work in teams or are in positions where they will one day be replaced, and aren't writing documentation just for themselves.
After all, that's what TDD – and, really, testing in general – is all about: Documentation.
> The problem is, everyone advocates writing multiple useless tests
Nobody advocates writing useless tests. Future readers should be able to learn something useful from every documentation item you write. That your function will return "retry after" when an upstream service is temporarily unavailable is very useful information for users and future developers to know. This is worth documenting.
You will undoubtedly need to write multiple tests. Nobody wants to read documentation that is one big ball of mud. A sane developer will break different cases into different tests to help future readers clearly understand that there are different environmental cases, like the range of failure states, to consider.
> Even in the simple example given, there are probably hundreds of different failure cases to account for.
Definitely not hundreds. But yes, they need to be tested
> The successful case is usually the most understood behaviour, most visible, and easiest to recreate.
Tests are not written to document behaviour, but to validate that your program does what it's supposed to do. If you don't test successful cases, you don't even know if it runs correctly.
> If you are going to skimp out on documenting some aspects of your code because you hate future developers, the most understood aspect is where you are best to skimp.
I see you have the erroneous assumption that tests are documentation.
> It is certainly backed up every time I have worked with developers who put no effort into making their code testable, making it way harder than it should be to test certain cases afterwards.
This is slightly orthogonal to being able to write testable code etc. Most tests are trash to begin with, and most testing frameworks give next to zero help in designing and running actual tests you would really want.
I couldn't care less if a team mate wrote "untestable code" in some modules. 99% of the time that code is some repetitive boilerplate anyway. Ironically, this is the code that everyone prioritises making testable.
And then you want to test all that in concert... Ahahaha good luck. Even starting up a local rest service with mocked dependencies to test is a royal pain in the butt in most cases, and has nothing to do with testability of some small pieces of code.
> that's what TDD – and, really, testing in general – is all about: Documentation.
Tests are not documentation, have never been, and can never be viewed as such.
Testing is literally about testing: to test that your program behaves correctly given some requirements. It literally is in the name. You write tests not to educate your co-workers, but to make sure your program doesn't crash and burn the moment you push it to prod.
> Nobody advocates writing useless tests. Future readers should be able to learn something useful from every documentation item you write.
Again:
1. Tests are not documentation. Have never been, will never be.
2. Existing testing practices all but enforce writing useless tests, prioritising duplication, hundreds of unit tests, etc. over what's actually needed.
> Nobody wants to read documentation that is one big ball of mud.
All tests are inevitably a ball of mud because they lack context, and they are not documentation.
> Definitely not hundreds. But yes, they need to be tested
Most likely hundreds. Each individual external service will easily have tens of conditions (a multitude of invalid input states, a multitude of malformed response payload states, a multitude of expected upstream error conditions, network interruption, port exhaustion, expired/invalid certificates, failed authentication/authorization, timeouts, etc., etc.) and when multiplied across them it won't be long before you have hundreds.
> Tests are not written to document behaviour, but to validate that your program does what it's supposed to do
A common misconception, but no. Tests would serve no purpose if you had no reason to document expected behaviour. It is correct that tests also allow the machine to validate that what the documentation says is true, solving the problem of the past where documentation and implementation would regularly fall out of sync. That is, indeed, the value proposition over writing the same information in Word instead.
Your documentation doesn't end there, of course, just as the documentation provided by a static type system is not the be all and end all of your documentation. But documentation it most certainly is.
> Existing testing practices all but enforce writing useless tests
The invented practices you have presented throughout the discussion no doubt lead to writing useless tests. It is, however, not clear why you hang on this invention of yours – outright ignoring what you are being given. Is it some kind of defence mechanism to protect the methods you have developed for yourself?
> Each individual external service will easily have tens of conditions
You will likely not propagate those to the end user, but log and return a single error for many of those cases
> A common misconception, but no.
A common misconception is that tests are documentation in any shape or form.
> Tests would serve no purpose if you had no reason to document expected behaviour.
You do not document behaviour with tests. You write the tests to make sure your app conforms to documented behaviour.
As documentation tests are useless because they are a collection of separate and disparate scenarios and cases lacking any context or continuity. If anyone ends up looking at tests for documentation on how your app or service behaves, I have bad news for you.
> Your documentation doesn't end there
That is literally the dead end of documentation. Well, the actual dead end is "code is documentation" and "types are documentation". Neither of them is. Documentation is documentation. Comments are documentation. Design docs and specs are documentation.
Tests, and especially the way most practices propose tests should be written, could not be farther from documentation.
> The invented practices you have presented throughout the discussion no doubt lead to writing useless tests.
Indeed they do.
> It is, however, not clear why you hang on this invention of yours – outright ignoring what you are being given.
So you have fabricated this idea that I am ignoring something and are now attempting to make me defend this fabrication of yours.
Quite likely given the exact example we speak of. Not likely for all cases.
> You will likely not propagate those to the end user, but log and return a single error for many of those cases
Agreed. But your code still has to get there and, unless you've given up on testing, test that each of those potentially hundreds of error states leads to the single error response with the appropriate log that you expect.
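A sketch of how those many upstream failure states can collapse into one table-driven test against the single error response (everything here is invented for illustration):

```java
// Parameterized test: whatever went wrong upstream, the caller should see one
// generic error status while the details go to the log.
import java.util.stream.Stream;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

class UpstreamFailureMappingTest {

    // Stand-in for the real mapping layer, inlined so the sketch compiles.
    static int mapToClientStatus(String upstreamFailure) {
        return 502;
    }

    static Stream<String> upstreamFailures() {
        return Stream.of(
                "timeout",
                "connection refused",
                "malformed payload",
                "expired certificate",
                "401 from upstream",
                "500 from upstream");
    }

    @ParameterizedTest
    @MethodSource("upstreamFailures")
    void everyUpstreamFailureBecomesOneGenericError(String failure) {
        assertEquals(502, mapToClientStatus(failure));
    }
}
```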
If you don't put any thought into testing upfront, this is where you can quickly end up with a mess that does really take too long to add testing to. And let's face it, those without much testing experience don't know much about what could be done to not create such a testing nightmare. I see it time and time again.
No need for religion. Do whatever you want and if you understand the tradeoffs are no doubt you are better off for it. But a lot of developers without much experience don't understand the tradeoffs.
> You write the tests to make sure your app conforms to documented behaviour.
Yes, exactly. And the document that documents said behaviour is.... Your tests. Not a bad practice to have additional documentation on top, but the tests will ultimately be your source of truth. They serve as the documentation that the code is tested against.
> Tests, and especially the way most practices propose tests should be written, could not be farther from documentation.
Go on. Are you creating your own unusual definition of the word documentation on the spot here, or is there something more profound in here?
> So you have fabricated this idea that I am ignoring something and is now attempting to make me defend this fabrication of yours.
I have observed this idea that you are ignoring what I've written. It's right there. I write something, you immediately reject it without any curiosity and then go off on some unrelated tangent that has nothing to do with what was said to justify your rejection. It is quite curious.
It could be that you simply don't understand, but normally when people don't understand they want to learn. Education is usually considered a desirable quality. Which left me wondering whether this behaviour is some kind of defence mechanism to make sure you don't gain any insights into other ways of working, so as to protect what you feel is best?
I use tests all the time when implementing new features I haven’t implemented before. It makes me think about the API before implementing it and gives me a way to auto test it which is excellent.
TDD assumes that you can write a test as the first step. Which sort of sounds reasonable if you don't think about it too much. Sometimes it is reasonable (e.g. leetcode, improving existing code, well-defined problems).
But a lot of the time that is sneakily missing out the enormous steps of prototyping, experimenting, trying solutions that didn't turn out to be a good idea, finding out that the problem wasn't a good idea, etc. etc.
All that can take longer than actually implementing a solution.
So, fine use TDD where it works well but don't pretend it is the only way to do things.
Exactly. To me, TDD seems to be a way for experienced programmers to solve puzzles in a new way.
It's all very well saying that good architecture will emerge from TDD, but there's no way that's true unless the programmer has a good idea of where they are going in the first place.
If you know where you are going, what value would you derive from having tests drive your development? Why wouldn't you want to be the driver?
TDD is most useful when you don't know what you are getting into. It allows you to quickly run small experiments against your hypotheses to see if the fuzzy thoughts running around your head are useful or need to be thrown out. The data gained from those experiments is what drives your development decisions.
If you already have everything already figured out, what more data do you need?
Having some ADD tendencies, I love TDD for helping me focus and get started—without it I just daydream about software rather than write the software. Once I get going then I’m okay (though I still use TDD) but TDD and pair programming are both great for me, to get me started.
I also have two young kids and can absolutely relate to the author.
Yeah, I have diagnosed ADHD and I struggle staying focused and on task (and it occasionally gets me in trouble). But TDD gave me a framework to get things done and focused in a doable fashion.
For me switching between code and tests is a context switch. TDD approaches that I'm familiar with encourage frequent swapping between them which really drains my productivity. It's much easier for me start something and hyperfocus than it is to swap back and forth.
I think the true struggle of ADHD is that you're in fact constantly context switching. There's the great theory that ADHD is an evolutionary advantage because it made hunting easier.
I think it's harder to set up a project initially for it. And converting a brownfield project to tests is a bear.
But once it's working, it's the opposite of overwhelming. The red-green-refactor cycle of TDD lets you take work in very small chunks. And because you get great test coverage as part of it, there are way fewer landmines. For me at least, TDD in a good code base is the most soothing and productive way to work I've found so far.
I would say overwhelming and daunting is not knowing why your software becomes fragile and low quality without tests.
TDD gives you a sense of control. You know that you are done once your tests (read: documented requirements) pass.
And yes, I realize not doing TDD is not the same as not doing tests at all, but writing tests post-fact is just an excuse to wiggle out of writing them. "It worked when I coded it!"
Tests merely document expected behaviour of your application. If you are unsure of what behaviour your application should exhibit, you need to step away from the code and talk to your customers for a while.
Almost always when I see criticism of TDD it quickly becomes apparent that the person doing the criticizing either has no idea how the TDD process is meant to work, has no idea how to separate concerns when writing code, or both.
For example the claim that TDD impairs your ability to modify code is completely the opposite of my experience. It’s quite simple, just don’t write overly specific tests, and test your module interfaces (however your language implements them) rather than the internals. One code smell is that if your tests are getting in the way of refactoring you’re certainly doing it wrong, since refactoring is a key and I would say even defining part of the TDD development process.
Disciplined people are exceptional. Most people when forced to do TDD will just write a bunch of nonsense tests that adds to your technical debt, it requires discipline to write good tests.
I started appreciating TDD as I've been using GitHub Copilot; the results it produces become scarily accurate after you write unit tests (that it helps you write). It's actually an amazing workflow to just write out the tests first and then generate the implementation with Copilot.
This post seems to support my hypothesis about TDD: it is for the poor programmers. If your programming skills are limited (for whatever reason - you are junior, or you are just sh*t tired or distracted) it can help by not forcing you to think your code through, and it enables you to make some kind of progress despite that. (Whether that is a good thing or a bad thing, I'm not sure, but for juniors I can accept it as a viable alternative.)
But the problem is that if you _only_ do TDD then you will be stuck being a poor programmer, because you will never develop the mental skills to design and hold code structures in your head beyond the trivial size. Beyond a certain point of maturity in your professional career you will just have to move past that, and then you will realize you have no use for TDD in 99% of the cases, it doesn't help you, in fact it slows you down and makes your code quality poorer.
That's an edgy thing to post, tbh, in terms of how HN readers will take this.
I think there are 2 arguments for TDD:
1. For at least some bits of your code, you would benefit from having tests. Writing tests after the fact is a pain, so might as well start with them.
2. Even for highly skilled programmers who enjoy thinking about their code deeply as they write it, there is still a class of difficult algorithms where they would benefit by writing the tests first. I'm thinking about stuff like tricky graph algorithms or scientific computations etc.
That being said, I generally have found TDD to be too onerous - except for maybe 1% of the time when I do find the extra safety brought by the tests to be useful.
> This post seems to support my hypothesis about TDD: it is for the poor programmers
If there’s one thing I’ve learned after decades in the industry, it’s that we are all poor programmers. Our squishy wet human brains are simply not a good match for what computers do, and as such, we should take all the help we can get :)
I think it's more that TDD is great for testing the kinds of things juniors are more capable of writing.
Once you're dealing with multiple systems, or outputs dependent on the combination of an input and a complex state, unit tests aren't as easy to write or as useful.
This is maximum Dunning–Kruger effect right here. This is how you end up with cowboy code nobody but you can debug. You'll spend all day bragging to everyone on Twitter about how you can put "big data structures in your head", meanwhile you're spending hours manually testing things that only require 10 seconds of the full test suite. Then over the next 2-3 years, random things break between releases unless your QA team is doing hours of regression tests all the time.
Once I read an article about the idea that one should develop an app without even starting it in the dev env, using only automated tests. Maybe that's a bit too far, but dealing with a backend app that takes minutes to start, automated tests are a godsend and I always advocate them to younger coworkers.
TDD is an advanced pattern which isn't suitable for junior developers. I feel that's the main reason why TDD fails to gain momentum in many teams: those starting out still need to learn many other things before being mature enough to start with this approach.
I disagree. I started learning programming seriously on my own, using Ruby. And TDD/testing was built into most sources I could find (books and courses, not random blog posts) so I started doing it almost right away. It’s paid huge dividends ever since and it’s not rare for me to join a larger corporate/enterprise type of environment and be the only one who seems to even be aware of automated testing as a concept.
I've managed a few teams so far and made my observations about developers who didn't learn testing from the ground-up. TDD is not something i would describe as an intuitive concept, especially in the context of business requirements.
+100 on this article. I observed the same in two stages of my life.
1. When I became a more senior scientist - scientists usually write crappy code - but once I had a lot of other stuff like coaching, strategic planning etc. on my plate, I was only able to code in short bursts. Making sure I could trust my code and didn't need to wait for later stages increased my code output even though I was coding less of the time. Similar when I became a manager who loves to add code from time to time.
2. Totally in line with OP, once I got kids, I started to use TDD extensively even in my hobby projects.
The tests were giving me confidence on the one hand, and at the same time they documented how to use my stuff - which is important if your coding is spaced over weeks without any coding at all where life is keeping you busy.
So it was for me indeed also triggered by constraints to significantly improve my productivity
I have a feeling TDD proponents work in very different code bases than I do. The point in this article is spot on that TDD emphasizes nice-to-have unit tests over the much trickier but more bug-prone integration tests. For frontend dev the surface area of things that can go wrong is much higher than, say, in programming an API, so just writing the tests for a lot of changes seems much more difficult than writing the code itself, since you won't know what can go wrong until you build it. A lot of frontend programming is throwing something on the DOM and then trying to understand the weird ways it went wrong, like a scroll bar width being slightly different in every browser, or the line height of a text input behaving unexpectedly when you change a font. I have no idea how you would build these features with TDD. Seems like the more formal your codebase, the more useful it is.
Unless you work at a big company where you're just a small cog in the machine and your work doesn't matter much, you will be defeated and destroyed by folks who move much faster at building and fixing things in this rapidly changing world. By the way, Android OS wouldn't exist if the amazing team had used TDD rather than just focusing on building a good quality product and moving really fast.
Nah. I've done a startup where we did TDD and pair programming (with frequent pair rotation) for all production code. We were incredibly productive. High code quality and good tests mean you get to spend way less time on mysteries and bugs. Aggressively refactoring the system means you end up with flexible, composable units that make it easier to keep up with rapid changes.
It's basically the same deal as cooking. Can I go faster if I dirty all the pans and don't worry about cleaning up? For a little while! And after that, productivity and quality drop. There's a reason that professional chefs are big on "working clean". For those interested, there's a book with a lot of good interviews with chefs on their work practices. I think a lot of it translates to software: https://www.workclean.com/
The Android OS is OK from my personal experience. I wouldn’t brag about it. Now Android App development is by far one of the worst experiences I’ve had the misfortune of having, and I repeatedly thought how everyone involved with the design of those APIs ought to be fired. Maybe they should have been using TDD!
When things scale in complexity, TDD is the reason why people can move fast. I suggest reading about extreme manufacturing. TDD is not only for software.
Our company is intentionally small, we have been around for over 10 years, we move fairly fast to address needs in our market segment, and we have been doing mostly TDD for years.
I highly recommend writing automated tests to guide what you do even for small companies. Perhaps especially for resource constrained ones, when every hour spent working is hard earned.
> Specifically, it's a test first methodology, where […]
I am very confused by now, since I've heard one vocal camp saying TDD=TFD, and another vocal camp saying it most definitely is not, and that TFD is in fact the black sheep of TDD and should mostly be avoided.
What do people here think? (and if TDD is not necessarily TFD, then what is it exactly?)
Factor in the additional overhead of TDD and let the boss/team decide if TDD is to be used. Even if they agree on the extra 3 days on account of TDD, I would guess that people will use it for items without demanding timelines; crunch-time stuff would be exempt.
This is just insanely wrong-headed. Tests with no code are always negative value, the time it took to write the tests. Code with no tests might also be negative value, but it at least has the possibility of having positive value.
Tests are great and TDD is great for some people, but the whole point is to write useful software. It's important not to lose sight of that.
They may have negative value, but it is not a foregone conclusion. Tests are merely documentation and that documentation very well could provide positive value for someone who reads it. There is no doubt insights to be gained in reading about what someone was once thinking about a particular problem. If well thought out, you might even begin implementation based on that work, saving time having to document it all over again.
Just don't be dogmatic about practices like TDD. It's always a tradeoff. Being a good engineer means understanding when the tradeoff is worth it and when it isn't. Be wary of development teams, or people, that force you to practice certain methodologies regardless of context.