I used to take this approach (didn’t worry about late interpolation for debug logs unless in a tight loop), but I was wrong and got bitten several times. Now I very strongly tend to favor lazy interpolation for logging.
The reason is that focusing on formatting CPU performance misses a bigger issue: memory thrashing.
There are a surprisingly large number of places in the average log-happy program where the arguments to the logger format string can be unexpectedly large in uncommon-but-not-astronomically-improbable situations.
When the amount of memory needed to construct the string fed down into the logger (even if nothing is done with it because of log levels) is an order of magnitude or two bigger than the usual sub-1kb log line, the cost of allocating and freeing that memory can be surprising. Surprising enough to go from "only tight number-crunching loops will notice overhead from logger formatting" to "a 5krps webhook HTTP handler that calls logger.debug a few times with the request payload just got 50% slower due to malloc thrashing".
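To make the difference concrete, here's a minimal sketch (the handler and payload names are hypothetical): with an f-string the full string is built before logger.debug ever checks the level, while lazy %-style formatting only builds it if a handler actually emits the record.

```python
import logging

logger = logging.getLogger("webhook")

def handle(payload: dict) -> None:
    # Eager: the f-string builds the (possibly huge) repr of payload
    # before logger.debug even looks at the log level.
    # logger.debug(f"received payload: {payload}")

    # Lazy: the format string and argument are passed through as-is;
    # the string is only rendered if a DEBUG record is actually emitted.
    logger.debug("received payload: %s", payload)
```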
As said, I default to deferred %-formatting. That still leaves a judgement call weighing the value of the log line against the cost and frequency of the call. On the other hand, f-strings are faster than the other formatting methods (they compile to their own bytecode instructions), more readable, and less likely to raise a runtime error. Judgement call.
Bottom line, still a micro optimization on a modern machine in the vast majority of cases.
If you have a truly costly format operation and still want to use an f-string, look into logger.isEnabledFor().
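A minimal sketch of that guard (the handler and the json.dumps call are just stand-ins for an expensive format operation): the check makes sure the costly work only runs when a DEBUG record would actually be emitted.

```python
import json
import logging

logger = logging.getLogger("webhook")

def handle(payload: dict) -> None:
    # Guard the expensive serialization so json.dumps (and the f-string)
    # only run when DEBUG is actually enabled for this logger.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug(f"received payload: {json.dumps(payload, indent=2)}")
```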