Indeed, the Fibonacci sequence is in the original benchmark, and fib(15) = 610 fits comfortably in a machine word, so it is not benchmarking big integer performance. It should have the same performance characteristics as the always-zero function.
I think gp's point is that bif[n_] == 0 for all n. A 'smart' optimiser would recognise this, so the time to compute would be (a) constant irrespective of the value of n, and (b) effectively instantaneous, because the call can be rewritten to the constant value 0.
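For concreteness, a minimal sketch of the presumed definition in Mathematica-style rewrite rules (the thread implies bif mirrors fib's recursion but sums zeros; the exact definition is my assumption):

    (* hypothetical bif: same call tree as fib, but every value is 0 *)
    bif[0] = 0;
    bif[1] = 0;
    bif[n_] := bif[n - 1] + bif[n - 2];
    (* a sufficiently smart rewriter could collapse this whole
       definition to bif[_] := 0 and answer in constant time *)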
Understood, and my point is that user Someone suggested "bif" as a way to avoid testing bigint performance, but fib[15] does not test bigint performance in the first place.
So using bif is unnecessary and potentially harmful if it's optimized to zero.
I suspect this benchmarks bigint libraries more than term rewriting. A way to test that may be:
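For example (a hedged sketch; fibBig, fibMod, the modulus 2^61, and the limits are illustrative choices of mine, not from the original benchmark): time the same memoised recursion with and without modular reduction, so the number of rewrites is identical but bigint growth is eliminated.

    (* memoised fib: values grow to thousands of digits at large n *)
    fibBig[0] = 0; fibBig[1] = 1;
    fibBig[n_] := fibBig[n] = fibBig[n - 1] + fibBig[n - 2];

    (* identical rewrites, but results stay machine-word sized *)
    fibMod[0] = 0; fibMod[1] = 1;
    fibMod[n_] := fibMod[n] = Mod[fibMod[n - 1] + fibMod[n - 2], 2^61];

    (* fill the memo tables bottom-up so recursion stays shallow;
       repeat in a fresh session at several limits (10^4, 10^5, ...) *)
    First@AbsoluteTiming[Do[fibBig[k], {k, 2, 10^5}]]
    First@AbsoluteTiming[Do[fibMod[k], {k, 2, 10^5}]]

If the gap between the two timings widens as the limit grows, the bigint library, not the term rewriter, is what is being measured.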
You can check that neither tool is smart enough to solve that in closed form by comparing running times for different large limits: if the time keeps growing with the limit, nothing is being optimised away.