
Well, I am also in the reading group, and I was very happy to see this pop up here.

But all these methods, in practice, lead to integer overflows.

They work very well for smaller Fibonacci numbers, but not for, say, the 1500th Fibonacci number.

That is very easy and practical to do instead with the naive recurrence using an array-based approach:

    n = 1500
    fibs = [0, 1]
    for i in range(2, n + 1):  # fibs[k] is fib(k), so go up to and including n
        fibs.append(fibs[-2] + fibs[-1])
That's the core.

I have been meaning to write it up, but I've been dealing with a bad flu.

The closed form leads to approximation errors.
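
You can see where the double-precision closed form first breaks by checking it against the exact recurrence (a rough sketch; the exact cutoff may differ by platform):

    import math

    sqrt5 = math.sqrt(5.0)
    phi = (1 + sqrt5) / 2

    # Walk the exact sequence until the rounded float64 Binet value disagrees.
    a, b = 0, 1
    n = 0
    while round(phi ** n / sqrt5) == a:
        a, b = b, a + b
        n += 1
    print(n)  # first index where the float64 closed form is wrong (low 70s on my machine)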

I tried Julia, NumPy, Numba, JAX, and Rust, but integer overflows show up in every parallelized approach.

The naive implementation with LRU caching is the fastest and most reliable so far.
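
Concretely, something like functools.lru_cache on the recursive definition; note that the default recursion limit (1000) has to be raised for n around 1500:

    import sys
    from functools import lru_cache

    sys.setrecursionlimit(5000)  # default of 1000 is too low for fib(1500)

    @lru_cache(maxsize=None)
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(1500))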

I am surely missing something; there must be ways to work around these integer overflows, right?

I hoped to see those in this article, but they were simply not there.

Can anyone else help?

Even UInt128 is not enough for the 1500th Fibonacci number. The upside of Rust is that it doesn't fail silently. The compiler is really helpful.

JAX, Julia, and NumPy confidently give wrong answers. The only way to get a feel for it is to eyeball a bunch of entries and check for negative numbers. Overflows are very easy to spot when you are at, say, the 1600th Fibonacci number, but you can feel confident about wrong answers at smaller ones.




Python has arbitrary-precision integers, so you could do it without running into overflows. Although if you are implementing a floating-point-based technique, you may want to use np.float128 to handle values up to 10^4932.
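
For example, the naive loop from upthread stays exact with plain Python ints:

    # Exact fib(1500) with plain Python ints -- no overflow possible
    fibs = [0, 1]
    for i in range(2, 1501):
        fibs.append(fibs[-2] + fibs[-1])
    print(len(str(fibs[1500])))  # 314 digits, all exact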


> you may want to use np.float128 to handle values up to 10^4932.

I'm not sure the maximum value matters, since I think it only actually gives you about 33 decimal digits of precision, and the 1500th number has over 300 digits.

Although it looks like you'd need to be more careful with it too:

https://numpy.org/doc/stable/user/basics.types.html

> np.float96 and np.float128 are provided for users who want specific padding. In spite of the names, np.float96 and np.float128 provide only as much precision as np.longdouble, that is, 80 bits on most x86 machines and 64 bits in standard Windows builds.
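
You can check what a given platform actually provides:

    import numpy as np

    # On typical x86-64 Linux this reports 18 decimal digits (80-bit extended),
    # well short of the 300+ digits of fib(1500).
    print(np.finfo(np.longdouble).precision)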


My bad. I should have looked into the docs. Thanks for pointing that out!


There are "bignum" implementations for every language. Though I never tested the performance impact on closed-form Fibonacci when a defined double precision >64bit is used.

https://gmplib.org/
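
As a rough, unbenchmarked sketch of the idea in pure Python (using the standard-library decimal module rather than GMP):

    from decimal import Decimal, getcontext

    getcontext().prec = 400  # fib(1500) has ~314 digits, so leave headroom

    def fib_binet(n):
        sqrt5 = Decimal(5).sqrt()
        phi = (1 + sqrt5) / 2
        # Binet's formula; the (-1/phi)^n correction term is far below 0.5,
        # so rounding to the nearest integer recovers the exact value.
        return int((phi ** n / sqrt5).to_integral_value())

    print(fib_binet(1500))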

Your code has an accidentally quadratic runtime (instead of linear). Since the array is appended to, the code regularly increases the memory region and has to move all the previous data over.

You could pre-allocate the memory, since n is known ahead of time. But you don't actually need all that memory: you only need to store the last two entries.
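
Something like:

    def fib(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b  # keep only the last two values
        return a

    print(fib(1500))

Constant memory, and still exact thanks to Python's arbitrary-precision ints.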


Calculation of such large numbers is usually handled by BigInt libraries, which are more or less only limited by memory. In Rust that would be the uint crate.



