
There's a very controversial conclusion to this: some projects ought to have zero, or close to zero, unit tests.

I agree very strongly with this, but a lot of people will be very unhappy with the idea.



I agree with this, and I'll also admit that 95% of the code I write is plumbing code. (There is an art to making your plumbing nice.)


Yes, I agree as well. Our company uses Spring to write banking software, and there is rarely a case involving pure logic that can be separated from its dependencies. I used to try isolating code into separate methods that took no dependencies, but it just made the code harder to read. Now we just test by invoking the gRPC endpoints, including the DB (with rollback), and it works quite well.


I would suggest making the business logic stateless methods ("functions") that take immutable data records passed between them.

That allows strict separation of all I/O from testable business logic.

If you can't separate pure logic from your I/O, it means you have a Russian-doll program that looks like:

    readFromApi {
      doSomeBusinessLogic {
        writeToPersistence {
          ...
        }
      }
    }
Instead of a pipeline like:

    a <- readFromApi
    b <- doBusinessLogic(a)
    c <- writeToPersistence(b)

If you do things this way, you can always isolate your business logic from your dependencies.
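A minimal Java sketch of that pipeline shape, assuming invented names (ApiOrder, EnrichedOrder, readFromApi, and so on) rather than any real codebase:

    // Immutable data records flow through the pipeline.
    record ApiOrder(String id, int quantity, int unitPriceCents) {}
    record EnrichedOrder(String id, int quantity, int totalCents) {}

    class Pipeline {
        // I/O at the edge: fetch raw input (stubbed here).
        ApiOrder readFromApi() { return new ApiOrder("o1", 3, 500); }

        // Pure, dependency-free business logic on immutable records;
        // this is the only part that needs a plain unit test.
        static EnrichedOrder doBusinessLogic(ApiOrder in) {
            return new EnrichedOrder(in.id(), in.quantity(),
                    in.quantity() * in.unitPriceCents());
        }

        // I/O at the edge: persist the result (stubbed here).
        void writeToPersistence(EnrichedOrder out) { /* db write */ }

        void run() {
            ApiOrder a = readFromApi();
            EnrichedOrder b = doBusinessLogic(a);
            writeToPersistence(b);
        }
    }

Nothing inside doBusinessLogic can reach the API or the DB, so it can be tested with no mocks at all.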


The problem is that doBusinessLogic(a) is often entirely about transforming a into whatever the current DB accepts. Sure, you can write a test to check that b.Field_old == a["field"], but this buys you very little. The real question is whether you should have mapped a["field"] or a["oldFields"]["Field"] to b.Field_old, and your unit test is not going to tell you that; you need an integration test to actually verify that you made the right transformations and are getting the correct responses.

By all means, if the transformation is non-trivial and captured entirely in the logic of this method, not in the shape of the API and the DB, then you should unit test it (e.g. when you are enforcing some business rules, or computing some fields based on other fields). But if you're just passing data around, this type of testing is a waste of time (you have no reason to change the code if the API or DB don't change, so the tests will never fail) and brittle (changes in the API or in the DB will require changing both the code and the tests, so a failing test doesn't point to any error you didn't already know about).
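To illustrate the case that is worth unit testing, here's a hedged sketch of a computed field with a real rule behind it; the rule (a bulk discount above 10 units) is invented for illustration:

    record Order(int quantity, int unitPriceCents) {}

    class Pricing {
        static int totalCents(Order o) {
            int gross = o.quantity() * o.unitPriceCents();
            // Business rule: 10% off for orders of more than 10 units.
            return o.quantity() > 10 ? gross * 90 / 100 : gross;
        }
    }

    // JUnit-style check: 11 * 500 = 5500, minus 10% = 4950.
    // assertEquals(4950, Pricing.totalCents(new Order(11, 500)));

A test like this can actually fail for an interesting reason; a test that merely copies a field from one shape to another cannot.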


> The real question is whether you should have mapped a["field"] or a["oldFields"]["Field"] to b.Field_old, and your unit test is not going to tell you that; you need an integration test to actually verify that you made the right transformations and are getting the correct responses.

So I would argue you don't actually have business logic then. Your service is anemic, and you have a data transformation you need to do. I definitely think that you should do an integration test for that.

Moving JSON -> Postgres or whatever is something you can still test by asserting on the DML your DB library emits. It may be a silly test, but that's because a program with no business logic is a silly program _shrug_.
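As a rough sketch of that idea, assuming a hand-rolled builder standing in for whatever DML your actual DB library generates (the names UserRow and insertFor are hypothetical):

    record UserRow(String name, String email) {}

    class Dml {
        static String insertFor(UserRow r) {
            // Real code would use bind parameters; literals kept for brevity.
            return "INSERT INTO users (name, email) VALUES ('"
                    + r.name() + "', '" + r.email() + "')";
        }
    }

    // assertEquals(
    //     "INSERT INTO users (name, email) VALUES ('Ada', 'ada@example.com')",
    //     Dml.insertFor(new UserRow("Ada", "ada@example.com")));

The assertion pins down the exact row shape you hand the DB, without needing a live database in the test.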


While it's bad form to reply to your own post, I might add that this is just what a function is, in the large: you're viewing your program this way.

    a <- readFromApi             (input x)
    b <- doBusinessLogic(a)      (f(x))
    c <- writeToPersistence(b)   (output y = f(x))

You can also imagine more than one DB lookup or service call as I/O in different parts of the pipeline (g(f(x)) etc.), but it's always possible to pull state in explicitly and push it down into the business logic as arguments. It tends to make programs have flatter call stacks as well.
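A small sketch of that "state pulled in explicitly" shape, with two I/O lookups done up front and both results pushed down into a pure function (all names here are hypothetical):

    record Account(long id, int balanceCents) {}
    record FxRate(double eurPerUsd) {}

    class Transfer {
        // Pure: all state arrives as arguments, nothing is fetched here.
        static int balanceInEurCents(Account acct, FxRate rate) {
            return (int) Math.round(acct.balanceCents() * rate.eurPerUsd());
        }
    }

    // Caller does the I/O first, then calls the pure core:
    // Account a = accountRepo.load(42);   // I/O lookup #1
    // FxRate r = fxService.latest();      // I/O lookup #2
    // int eur = Transfer.balanceInEurCents(a, r);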


The people who treat this as controversial can be safely ignored.




