
For very simple billing, arbitrary precision sounds like overkill, as does worrying about rounding and order of operations.


Oh, sure, because when you are a small company you don't care that people get correct invoices.

I can certainly sympathise with this stance. There are about a billion things you could do better, but you have limited time to do any of them, so you have to prioritise. And if one invoice in ten thousand is off by one cent, and only one client in ten thousand who received a wrong invoice will actually notice, then it is hard to argue you should spend time fixing this one problem.

Just don't say you can do accounting correctly on floats and we will remain friends.


You can make the same arguments against fixed-precision decimal types. My systems represent currencies to 4 decimal places. At that level of precision, rounding and order-of-operations errors could accumulate much faster than with a 64-bit float.

Decimals are still the way to go; you just have to pick a level of precision acceptable for your application.
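
As a minimal sketch of what I mean (illustrative only; the helper name is made up): keep each amount as a BigInt count of 1/10,000ths of the currency unit, and write the rounding step out explicitly whenever you multiply.

// Sketch: a scale-4 fixed-precision amount, stored as an integer number of
// 1/10000ths of the currency unit. Positive amounts assumed for brevity.
// amount * (rateNumer / rateDenom), rounded half up in pure integer math.
function mulByRate(amountScaled, rateNumer, rateDenom) {
  return (2n * amountScaled * rateNumer + rateDenom) / (2n * rateDenom);
}

mulByRate(123456n, 75n, 1000n); // $12.3456 * 7.5% => 9259n, i.e. $0.9259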

My management definitely does not want me spending my time chasing errors over fractions of a penny. The only time those errors are discovered is when I compare the output of new code against old code.


Let me guess, the last 10 times you had to move jobs it was because of a difference in opinion with your boss about the importance of correcting one-cent-errors in one invoice out of every ten thousand?


Haha... no. But I may be focusing way more on reliability than 99.99% or so of developers.

The way I solve this problem isn't by constantly hopping projects. I try to find projects that actually require extreme reliability so that I can be doing what I want in an environment where there is a business case for it.


For simple accounting I've always used integers and done all operations in cents, only converting on the frontend. What's my downside here? I guess it wouldn't support unit prices less than a penny.


If you have different currencies you need to keep track of the number of decimals used, e.g. the yen (JPY) has 0 decimals, Bitcoin has 8, etc. It could even change over time, as the Icelandic ISK did in 2007. If you have different services with different knowledge about this you're in big trouble. Also, prices can have an arbitrary number of decimals up until you round them to an actual monetary amount. And if you have enough decimals, the integer solution might not have enough bits anymore, so make sure you use bigints (also when parsing JSON in JavaScript).
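
As a sketch of what that bookkeeping looks like (the table and helper are illustrative; take the real exponents from ISO 4217 or your own reference data):

// Sketch: per-currency minor-unit exponents instead of hard-coded cents.
const minorUnitDigits = { USD: 2, JPY: 0, BHD: 3, BTC: 8 };

// Amounts held as BigInt in minor units; serialize them as strings so large
// values survive JSON (Number silently loses precision past 2^53).
function toMajorUnitsString(amountMinor, currency) {
  const digits = minorUnitDigits[currency];
  const s = amountMinor.toString().padStart(digits + 1, '0'); // positive amounts only
  return digits === 0 ? s : `${s.slice(0, -digits)}.${s.slice(-digits)}`;
}

toMajorUnitsString(123456n, 'USD'); // "1234.56"
toMajorUnitsString(123456n, 'JPY'); // "123456"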

Example in js: Number(9999999.999999999).toString() // => 9999999.999999998

And make sure you're not rounding using Math.round

Math.round(-1.5) // => -1

or toFixed

(2090.5 * 8.61).toFixed(2) // => 17999.20, should have been 17999.21

8.165.toFixed(2) // => 8.16, should be 8.17

The better solution is to use arbitrary-precision decimals and transport them as strings. Store them as arbitrary-precision decimals in the database when possible.
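
For instance (a sketch assuming a library such as decimal.js; any arbitrary-precision decimal type works the same way, and the values here are made up):

// Amounts travel over the wire and into the database as strings, never floats.
const Decimal = require('decimal.js');

const unitPrice = new Decimal('0.000125'); // prices may carry extra decimals
const quantity = new Decimal('1000000');

const lineTotal = unitPrice.times(quantity); // exact: 125
const invoiced = lineTotal.toDecimalPlaces(2, Decimal.ROUND_HALF_UP);

JSON.stringify({ amount: invoiced.toString(), currency: 'USD' });
// => '{"amount":"125","currency":"USD"}'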


Also, many kinds of operations can give you the wrong result through premature rounding. E.g. let's say you're calculating 10% of $1.01 ten times and adding the results together. The correct total is $1.01, but with your method you will get $1.00.
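
A sketch of that scenario with the integer-cents approach described upthread:

// 10% of $1.01, computed ten times in whole cents and summed.
const priceCents = 101;
let totalCents = 0;
for (let i = 0; i < 10; i++) {
  totalCents += Math.round(priceCents * 0.10); // 10.1 cents rounds to 10
}
totalCents; // 100 cents, but ten exact shares of $0.101 total $1.01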


The correct answer will depend on the specifics of your environment. In some places, tax is calculated per line item. If you go to a dollar store and buy 10 items with 7.3% sales tax, each line's tax rounds to 7¢ and the 0.3¢ bits simply disappear. In other places, the tax is supposed to be calculated on the total for the tax category in the sale. If you wanted to keep it by line item you'd need the extra digits of precision.
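
A quick sketch of the two regimes (the prices and rate are made up):

// 10 one-dollar items at 7.3% sales tax, amounts in integer cents.
const items = Array(10).fill(100);
const rate = 0.073;

// Tax rounded per line item: 10 * round(7.3) = 70 cents.
const perLine = items.reduce((sum, cents) => sum + Math.round(cents * rate), 0);

// Tax rounded once on the category total: round(73.0) = 73 cents.
const onTotal = Math.round(items.reduce((sum, cents) => sum + cents, 0) * rate);

[perLine, onTotal]; // [70, 73]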


Well, yes, which is why you need to be in control of your rounding and not just let the width of the data type you chose for the implementation dictate that.
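
For example, even just making the rounding rule explicit instead of reaching for Math.round (a sketch; the helper name is made up, and half-up is only one of several rules you might be required to use):

// An explicit half-up rule, so negative amounts round symmetrically
// (Math.round alone sends -1.5 to -1, not -2).
function roundHalfUp(x) {
  return Math.sign(x) * Math.round(Math.abs(x));
}

roundHalfUp(1.5); // 2
roundHalfUp(-1.5); // -2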


I enjoyed Mark Dominus's blog post [0] about Moonpig, the billing system he co-wrote. It restates much of what the other responses say, namely that ignoring infinitesimal errors and rounding would have instilled a culture of, at minimum, doubt. Perhaps another way to see this is to look at a visualization [1] of the discontinuous coverage that floating point gives to the numbers we want to represent.

[0] https://blog.plover.com//prog/Moonpig.html#fp-sucks

[1] https://observablehq.com/@rreusser/half-precision-floating-p...



