Are you referring to snapshot testing [0][1]? I.e., you first "snapshot" the output of the function and commit it to version control, and each test run then feeds in the same input and compares the result against the snapshot, failing with a diff if it differs.
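For anyone unfamiliar, the whole mechanism fits in a few lines. A rough hand-rolled sketch in Python (the helper name and snapshot directory are made up for illustration; libraries like Jest [0] and insta [1] implement the same idea with better ergonomics):

    import difflib
    from pathlib import Path

    SNAPSHOT_DIR = Path("tests/snapshots")  # illustrative location

    def assert_matches_snapshot(name: str, actual: str) -> None:
        """First run records the output; later runs diff against it."""
        path = SNAPSHOT_DIR / f"{name}.txt"
        if not path.exists():
            # No snapshot yet: write the current output and commit it to VC.
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_text(actual)
            return
        expected = path.read_text()
        if actual != expected:
            diff = "\n".join(difflib.unified_diff(
                expected.splitlines(), actual.splitlines(),
                fromfile="snapshot", tofile="actual", lineterm=""))
            raise AssertionError(f"snapshot '{name}' differs:\n{diff}")

A test then just calls assert_matches_snapshot("report", format_report(sample_input)); updating an expectation is deleting the file and re-running.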
I'm about to try it soon; seems like a good ROI, as you said.
I think I am! Through convergent evolution at least - I hadn't found essays explicitly advocating it at the time.
My use case was comparing results from chains of Spark RDD & DataFrame transformations, so having fairly large, realistic input/output datasets was part of the game; that's the main reason manually writing out all the expected results wasn't feasible.
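A rough sketch of how that pattern can look with PySpark, reusing the hypothetical helper above (the dataset path and toy aggregation are illustrative, not my actual pipeline):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.master("local[2]").getOrCreate()

    # A committed, realistic input dataset (path is illustrative).
    orders = spark.read.json("tests/data/orders.json")

    # The chain of transformations under test.
    result = (orders
              .filter(F.col("status") == "shipped")
              .groupBy("region")
              .agg(F.sum("amount").alias("total")))

    # Serialize deterministically (sort!) so snapshot diffs stay stable,
    # then compare against the committed snapshot as above.
    actual = "\n".join(str(r.asDict())
                       for r in result.orderBy("region").collect())
    assert_matches_snapshot("orders_by_region", actual)
    spark.stop()

The sort matters: Spark makes no row-order guarantees, so an unsorted collect would make the snapshot flaky.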
[0] https://jestjs.io/docs/en/22.x/snapshot-testing

[1] https://github.com/mitsuhiko/insta