>The author isn't saying that long-running tests aren't valuable, though.
They kind of are, though, since they don't acknowledge the trade-offs inherent in reducing the duration of integration tests.
>In most cases I've seen, slow tests happen because nobody bothers optimizing test code
IME they typically come about because realism comes with a price tag: an actual UI is more expensive to load than a mock, an actual web server is slower than a fake one, a Postgres database spins up more slowly than SQLite, a real API call takes longer than a mocked one, etc.
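To make that price tag concrete, here's a minimal sketch (hypothetical schema and function names) of the classic swap: an in-memory SQLite database standing in for a real Postgres instance. It starts in microseconds instead of seconds, but the speed is bought by no longer exercising Postgres-specific SQL, types, or locking behavior:

```python
import sqlite3

def make_test_db():
    # In-memory SQLite starts in microseconds; a real Postgres
    # container takes seconds. The catch: this connection will
    # never catch Postgres-specific SQL, type, or locking bugs.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    return conn

def test_insert_user():
    conn = make_test_db()
    conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
    rows = conn.execute("SELECT name FROM users").fetchall()
    assert rows == [("alice",)]

test_insert_user()
```

Whether that trade is worth it depends entirely on how much of your risk lives in the database layer you just faked out.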
> IME they typically come about because realism comes with a price tag: an actual UI is more expensive to load than a mock, an actual web server is slower than a fake one, a Postgres database spins up more slowly than SQLite, a real API call takes longer than a mocked one, etc.
Sure, but spinning up a Postgres database or running a Selenium test should move your test times from seconds to minutes, not from seconds to hours or days.
> They kind of are, though, since they don't acknowledge the trade-offs inherent in reducing the duration of integration tests.
As I say in another comment in this thread, there is very, very, very rarely a legitimate reason for integration tests to need multiple hours to run. Even in the worst case, you can parallelize the tests across different hardware; if you can't parallelize, that is a problem with your tests, not a real excuse.
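The parallelization point is easy to demonstrate. A toy sketch (illustrative only; a real suite would use something like pytest-xdist or CI-level sharding rather than hand-rolled threads): independent tests that mostly wait on I/O can run concurrently, so wall-clock time is closer to the slowest test than to the sum of all of them.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_test(n):
    # Stand-in for an independent integration test that mostly
    # waits on I/O (database, network, container startup).
    time.sleep(0.2)
    return f"test_{n} passed"

def run_serial(tests):
    return [slow_test(n) for n in tests]

def run_parallel(tests, workers=4):
    # If tests share no mutable state, they can run concurrently.
    # If they can't, that coupling is the real problem to fix.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(slow_test, tests))

start = time.perf_counter()
results = run_parallel(range(4))
elapsed = time.perf_counter() - start
assert len(results) == 4
assert elapsed < 0.7  # serial would take ~0.8s here
```

The key precondition is independence: tests that share a database row or a global fixture can't be sharded, which is exactly the "problem with your tests" above.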
I agree that parallelization is often a good way to speed up your CI pipeline, but the OP's suggestion of shifting integration tests to unit tests is frequently the worst thing you can do, because it very often turns a test that actually verifies behavior into a pale imitation of the code under test - a mimetic test.
I worked on a big old ball of mud once where business logic was smeared across 6 or 7 different microservices. The ONLY way to gain confidence that the app did the right thing was to test it end to end, which, of course, took ages and meant an hours-long CI pipeline (longer if not parallelized).
My coworker tried to replace some of those tests with "faster" unit tests around bits of those microservices, but all they did was "lock down" the horrible architecture, making it impossible to refactor and consolidate the business logic without breaking a bunch of unit tests.
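A hypothetical illustration of that "mimetic test" failure mode (invented function and collaborators, not from the codebase in question): the test below checks no observable behavior, it just transcribes the implementation's internal calls with mocks, so any refactor - even a correct one that merges or reorders the collaborators - breaks it.

```python
from unittest.mock import MagicMock

def place_order(inventory, billing, shipping, order):
    # Business logic smeared across three collaborators.
    inventory.reserve(order["sku"])
    billing.charge(order["customer"], order["price"])
    shipping.dispatch(order["sku"], order["customer"])
    return "ok"

def test_place_order_mimetic():
    inventory, billing, shipping = MagicMock(), MagicMock(), MagicMock()
    order = {"sku": "A1", "customer": "alice", "price": 10}
    assert place_order(inventory, billing, shipping, order) == "ok"
    # These assertions restate the implementation line by line:
    # consolidate billing and shipping into one service and this
    # test fails even though users see identical behavior.
    inventory.reserve.assert_called_once_with("A1")
    billing.charge.assert_called_once_with("alice", 10)
    shipping.dispatch.assert_called_once_with("A1", "alice")

test_place_order_mimetic()
```

A test like this is cheap to run but expensive to own: it pins the current call graph in place, which is exactly how it "locks down" a bad architecture.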