Unit tests have a purpose, which is mostly to protect the programmer against future mistakes. Integration and system tests protect the user against current mistakes.
I've been on projects that focused almost exclusively on unit tests and on projects that focused almost exclusively on integration tests. The latter were far better at shipping actually working code, because most of the interesting problems occur at the boundaries between components. Testing each piece with layer after layer of mocks won't address those problems. Yay, module A always produces a correct number in pounds under all conditions. Yay, module B always does the right thing given a number in kilograms. Let's put them together and assume they work! Real life examples are seldom this obvious, but they're not far off. Also note that the prevalence of these integration bugs increases as the code becomes more properly modular and especially as it becomes distributed.
I firmly believe that integration tests with fault injection are better than unit tests with mocks for validating the current code. That doesn't mean one shouldn't write unit tests, but one should limit the time/effort spent refactoring or creating mocks for the sole purpose of supporting them. Otherwise, the time saved by fixing real problems more efficiently - a real benefit, I wouldn't deny - is outweighed by the time lost chasing phantoms.
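To make "fault injection at the boundary" concrete, here's a minimal sketch (all names hypothetical) of an integration test that wires real components together and injects a failure at their seam, instead of mocking everything out:

```python
# Hypothetical pipeline: a real converter wired to a real store.
class KgToLbConverter:
    def convert(self, kg):
        return kg * 2.20462

class FlakyStore:
    """Real in-memory store, wrapped so the test can inject a fault."""
    def __init__(self, fail_on_call=None):
        self.saved = []
        self.calls = 0
        self.fail_on_call = fail_on_call

    def save(self, value):
        self.calls += 1
        if self.calls == self.fail_on_call:
            raise IOError("injected fault")
        self.saved.append(value)

def record_weights(weights_kg, converter, store):
    """Orchestration code under test: convert, then persist, retrying once."""
    for kg in weights_kg:
        lb = converter.convert(kg)
        try:
            store.save(lb)
        except IOError:
            store.save(lb)  # naive single retry

# Integration test: real converter + real store, fault injected on call 2.
store = FlakyStore(fail_on_call=2)
record_weights([1.0, 2.0], KgToLbConverter(), store)
assert len(store.saved) == 2  # both values persisted despite the fault
```

The point is that the retry logic at the component boundary is exercised by a real failure, which a fully mocked unit test of either component alone would never reach.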
Unit tests protect you against current mistakes. They're tied to the exact implementation.
"Right now my function X should call Y on its dependency Z before it calls A on its dependency B.
I know that my method should do this, because this is how I designed it now.
Let me write a test and expect exactly that."
Integration and system tests will tell you whether your code still works in the future, after you refactor.
"Okay, we rewrote the whole class containing the function. Does running my thing still end up writing ABC into that output file?"
If unit tests are tied to an exact implementation, they'll fail on correct behavior, and that's definitely wrong. It shouldn't matter whether X calls Z:Y or B:A first, whether it calls them at all, whether it calls them multiple times, or whether it calls them differently. All that matters is that it gets the correct answer and/or has the same final effect.
Unit tests should be based on a module's contract, not its implementation. This is in fact exactly what's wrong with most unit tests, that they over-specify what code (and all of its transitive dependencies) must do to pass, while by their nature leaving real problems at module interfaces out of scope.
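A contract-based test looks at nothing but inputs and outputs. A hypothetical sketch, reusing the pounds/kilograms example from above:

```python
def to_pounds(kg):
    """Unit under test. The contract: return kilograms converted to pounds."""
    return kg * 2.20462

# Contract-based assertions: only observable behavior, no mocks,
# no call-order checks, nothing about how the conversion happens.
assert abs(to_pounds(0) - 0.0) < 1e-9
assert abs(to_pounds(10) - 22.0462) < 1e-9
```

Any refactor that preserves the contract passes this test unchanged; a test asserting which internal methods were called would not survive the same refactor.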
a) Most code in the wild doesn't have an explicit output and instead is orchestration code.
b) Even if you have an output, it's dependent on more complex input of arbitrary types.
Assume there's a method that returns an output based on summing the results of a method call on each of its two abstract dependencies.
To do dogmatically correct unit testing, you'd pass in those two mocked dependencies and have each mock return a canned value when the right method is called on it.
Then you'd assert that B was called on A, that D was called on C, and that the method under test returns the sum of those return values.
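In Python that dogmatic test might look like this (names hypothetical, using the standard library's `unittest.mock`):

```python
from unittest.mock import Mock

def sum_of_parts(a, c):
    """Method under test: sums the outputs of its two abstract dependencies."""
    return a.b() + c.d()

# Mock both dependencies and script their return values.
a, c = Mock(), Mock()
a.b.return_value = 3
c.d.return_value = 5

result = sum_of_parts(a, c)

# The "dogmatic" assertions: the right methods were called, once each,
# and the result is the sum of the scripted values.
a.b.assert_called_once()
c.d.assert_called_once()
assert result == 8
```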
As soon as you move into passing implementations of those 2 dependencies, to anyone dogmatic you're doing integration testing.
Even if the tester isn't being dogmatic, in a lot of cases these inputs are complex enough that building enough actual inputs that are consistent and realistic to cover all the cases is prohibitively costly, so they opt for mocks.
Now, suddenly you just have more code to maintain when making changes, but you feel good about yourself.
The interface on our object (O) that you are describing is:
O -> int
Your unit test is concerned with narrowing the interface above to:
O -> int // of specific value based on dependencies
If O's only dependencies are A and C, this can be rewritten to:
A -> C -> int // of specific value
Of course, if we assume A and C themselves have dependencies, we can recursively rewrite the above until we have a very long interface; but instead you have opted to mock (M) them:
M(A) -> M(C) -> int // of specific value
You then take it a step further and mock the method calls on each to return a specific value:
M(A) -> int
M(C) -> int
becomes:
M(A) -> 3
M(C) -> 5
Okay. Now we can rewrite our interface to:
3 -> 5 -> int // of specific value
and our test to:
3 -> 5 -> 8
and make our assertion that the result is indeed the sum of the inputs (not to mention the ridiculous assertions that specific methods were called within the implementation). Yikes... No wonder OOP gets a bad rap. All that for what amounts to a `sum` function.
The designer of the above monstrosity could learn a lot from the phrase "imperative shell, functional core". It sounds like dogma until you are knee-deep in trying to test the middle of a large object graph!
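A sketch of that split (hypothetical names): the pure core is tested with plain values and needs no mocks at all, while only the thin shell touches the object graph.

```python
# Functional core: pure, deterministic, trivially testable.
def total(values):
    return sum(values)

# Imperative shell: walks the object graph, then delegates to the core.
def total_from_dependencies(deps):
    return total(d.value() for d in deps)

# Testing the core requires no mocks and no object graph:
assert total([3, 5]) == 8
assert total([]) == 0
```

Only the shell, which is deliberately kept too thin to be interesting, would ever need doubles for its dependencies.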
No, unit tests aren't tied to the current implementation, they're tied to the current interface. If your programming interface calls for multiple interdependent objects without central coordination, then yes, you should test that. But I would say that you've already started out with code that is too badly structured to allow for testing the units in isolation: you should be able to unit test A without relying on Z at all.
It's integration testing that validates that all your units still combine (integrate) into a working end product. That's not about testing your implementation nor your internal interfaces, that's about testing your program's inputs and outputs.
All tests protect the programmer against future mistakes. All tests are a protection against regressions.
But yes, agreed: integration tests absolutely carry much more value than any unit tests might, specifically because unit tests tend to target things that are essentially implementation details.
The only time I'd say unit tests carry any value is if they're testing some especially important piece of business logic e.g. some critical computation. Otherwise, integration tests rank the highest in the teams I lead.