
> Certainly Mathematica’s term rewrite loop is optimized to death, and I only spent an hour or two making the most basic optimization

I suspect this benchmarks bigint libraries more than term rewriting. One way to test that:

  bif[1] := 0
  bif[2] := 0
  bif[n_] := bif[n-2] + bif[n-1]

  Timing[Do[bif[15], 1000]]
You can check that neither tool is smart enough to reduce that to

  bif[n_] := 0
by comparing running times for different large limits.
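A rough Python analogue of that test (hypothetical names, not from the original comment): the doubly-recursive call tree is identical to naive fib, but every value stays 0, so timings measure call/rewrite machinery rather than integer arithmetic.

```python
import time

# Hypothetical analogue of the Mathematica "bif" above: same recursion
# shape as naive Fibonacci, but base cases of 0 make every result 0,
# so no bigint (or even multi-word) arithmetic ever happens.
def bif(n):
    if n in (1, 2):
        return 0
    return bif(n - 2) + bif(n - 1)

def bench(n, reps=1000):
    # Time `reps` evaluations of bif(n), mirroring Timing[Do[bif[15], 1000]].
    t0 = time.perf_counter()
    for _ in range(reps):
        bif(n)
    return time.perf_counter() - t0

print(bif(15))   # always 0, whatever n is
```

If an implementation were "smart" enough to collapse bif to the constant 0, bench would stop growing with n; comparing timings at a few large n values exposes that.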


Fib(15) is just 610, so no big int involved.

As a quick double-check, fib(n) < 2^(n-1) for n >= 2, and 2^14 = 16384.
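A quick Python sketch of the arithmetic above (the bound holds from n = 2 onward, since fib(1) = 1 is not strictly below 2^0):

```python
# Verify the claims above: fib(15) = 610, comfortably below 2^14 = 16384,
# so computing fib(15) never needs big-integer arithmetic.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib(15) == 610
# fib(n) < 2^(n-1) for n >= 2 (each term is less than double the previous).
assert all(fib(n) < 2 ** (n - 1) for n in range(2, 40))
print(fib(15), 2 ** 14)  # 610 16384
```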


This is not the Fibonacci sequence: because the first two terms are 0, the entire sequence is 0.


Indeed, the Fibonacci sequence is in the original benchmark and Fib(15) is not benchmarking big integer performance. It should have the same characteristics as the always-zero function.


I think gp's point is that bif[n_] == 0 for all n. A 'smart' optimiser would recognise this, and so the time to compute would be (a) constant irrespective of the value of n, and (b) effectively instantaneous, because the function call can be rewritten as the constant value 0.


Understood, and my point is that user Someone suggested "bif" as a way to avoid testing bigint performance; but fib[15] does not test bigint performance in the first place.

So using bif is unnecessary and potentially harmful if it's optimized to zero.


Apologies. Misinterpreted the ‘1000’ while skimming the code.



