Yes, I agree as well. Our company uses Spring to write banking software, and there is rarely a case involving pure logic that can be separated from its dependencies. I used to try isolating code into separate methods that took no dependencies, but it just made the code harder to read. Now we just test by invoking the gRPC endpoints and include the DB (with rollback), and it works quite well.
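Roughly, those tests look like this (a minimal sketch; TransferServiceImpl, its transfer method, and the TransferRequest/TransferReply messages are made-up stand-ins for generated gRPC types, and the handler bean is called in-process so Spring's @Transactional test rollback applies):

    import io.grpc.stub.StreamObserver;
    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.transaction.annotation.Transactional;

    import java.util.ArrayList;
    import java.util.List;

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.fail;

    @SpringBootTest
    @Transactional  // Spring rolls each test's DB changes back automatically
    class TransferEndpointTest {

        @Autowired
        TransferServiceImpl service;  // made-up gRPC handler bean

        @Test
        void transferMovesMoneyBetweenAccounts() {
            TransferRequest request = TransferRequest.newBuilder()
                    .setFromAccount("A-1")
                    .setToAccount("A-2")
                    .setAmountCents(1500)
                    .build();

            // Collect whatever the handler emits through its StreamObserver.
            List<TransferReply> replies = new ArrayList<>();
            service.transfer(request, new StreamObserver<TransferReply>() {
                @Override public void onNext(TransferReply reply) { replies.add(reply); }
                @Override public void onError(Throwable t) { fail(t); }
                @Override public void onCompleted() { }
            });

            assertEquals(1, replies.size());
            assertEquals("OK", replies.get(0).getStatus());
        }
    }

The rollback means every test starts from the same DB state without any manual cleanup.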
The problem is that doBusinessLogic(a) is often entirely about transforming a into whatever the current DB accepts. Sure, you can write a test to check that b.Field_old == a["field"], but this buys you very little. The real question is whether you should have mapped a["field"] or a["oldFields"]["Field"] to b.Field_old, and your unit test is not going to tell you that; you need an integration test to actually verify that you made the right transformations and you're getting the correct responses.
By all means, if the transformation is non-trivial, and it is captured entirely in the logic of this method rather than in the shape of the API and the DB, then you should unit test it (e.g. you are enforcing some business rules, or computing some fields based on other fields). But if you're just passing data around, this type of testing is a waste of time (you have no reason to change the code if the API or DB don't change, so the tests will never fail) and brittle (changes in the API or in the DB will require changing both the code and the tests, so the tests failing doesn't help you find any errors you didn't already know about).
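Concretely, the kind of unit test I mean looks like this (a sketch; mapToLegacyRecord, LegacyRecord, and the field names are invented to mirror the example above):

    import java.util.Map;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class MappingTest {

        // Invented stand-in for the record the DB layer expects.
        static class LegacyRecord { String Field_old; }

        // Pure field shuffling, no business rule. The real question -- was
        // "field" even the right key, or should it have been a.get("oldFields")?
        // -- is not answerable here.
        static LegacyRecord mapToLegacyRecord(Map<String, Object> a) {
            LegacyRecord b = new LegacyRecord();
            b.Field_old = (String) a.get("field");
            return b;
        }

        @Test
        void mapsFieldToFieldOld() {
            LegacyRecord b = mapToLegacyRecord(Map.of("field", "value-from-api"));
            // Passes by construction: the assertion just restates the mapping,
            // so it can never catch a wrong choice of source field.
            assertEquals("value-from-api", b.Field_old);
        }
    }

The assertion is the mapping written twice, once in the code and once in the test, which is exactly why it breaks in lockstep with any API or DB change.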
> The real question is whether you should have mapped a["field"] or a["oldFields"]["Field"] to b.Field_old, and your unit test is not going to tell you that; you need an integration test to actually verify that you made the right transformations and you're getting the correct responses.
So I would argue you don't actually have business logic then. Your service is anemic, and you have a data transformation you need to do. I definitely think that you should do an integration test for that.
Moving JSON -> Postgres or whatever is something you can absolutely still test by checking the DML statement your DB library produces. It may be a silly test, but that's because if there's no business logic, it's a silly program _shrug_.
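By that I mean something in this direction (a sketch that hand-rolls the DML instead of using a real DB library, just to show the idea of asserting on the statement and its bind values; the table and field names are made up):

    import java.util.List;
    import java.util.Map;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class AccountDmlTest {

        // What the data layer hands to the DB: a statement plus its bind values.
        record Dml(String sql, List<Object> params) {}

        static Dml insertAccount(Map<String, Object> json) {
            return new Dml("INSERT INTO accounts (field_old) VALUES (?)",
                    List.of(json.get("field")));
        }

        @Test
        void buildsExpectedInsert() {
            Dml dml = insertAccount(Map.of("field", "value-from-api"));

            assertEquals("INSERT INTO accounts (field_old) VALUES (?)", dml.sql());
            assertEquals(List.of("value-from-api"), dml.params());
        }
    }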
While it's bad form to reply to your own post, I might add that this is just what a function is, in the large, when you view your program this way:
    a <- readFromApi            (Input x)
    b <- doBusinessLogic(a)     (f(x))
    c <- writeToPersistence(b)  (Output y = f(x))
You can also imagine that there is more than one lookup from the DB, or service calls as I/O, in different parts of the pipeline (g(f(x)), etc.), but it's always possible to have state pulled in explicitly and pushed down explicitly into business logic as an argument. It tends to make programs have flatter call stacks as well.
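In code, the shape is something like this (a sketch; the types, the names, and the extra lookup are invented just to illustrate pulling state in at the edges and pushing it down as arguments):

    import java.util.Map;

    public class Pipeline {

        // Plain values at every step.
        record ApiInput(Map<String, Object> fields) {}
        record DbRow(String fieldOld, long limitCents) {}

        // Pure function: all the state it needs arrives as arguments,
        // so it can be tested with plain values, no mocks or running DB.
        static DbRow doBusinessLogic(ApiInput a, long customerLimitCents) {
            return new DbRow((String) a.fields().get("field"), customerLimitCents);
        }

        public static void main(String[] args) {
            ApiInput a = readFromApi();             // I/O   (Input x)
            long limit = lookupCustomerLimit();     // I/O, extra state pulled in explicitly
            DbRow b = doBusinessLogic(a, limit);    // pure  (f(x))
            writeToPersistence(b);                  // I/O   (Output y = f(x))
        }

        // Stand-ins for the real I/O edges.
        static ApiInput readFromApi() { return new ApiInput(Map.of("field", "value-from-api")); }
        static long lookupCustomerLimit() { return 10_000; }
        static void writeToPersistence(DbRow b) { System.out.println("write " + b); }
    }

The I/O lives only at the top, and the call stack stays flat: nothing inside doBusinessLogic reaches back out to the DB.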
I agree very strongly with this, but a lot of people will be very unhappy with this idea.