
Specialising in financial software, over the past 2 decades I have been fighting over this with countless people, teams and companies.

For accounting you should only ever use an arbitrary precision math library with the ability to specify rounding rules. If your programming environment/language does not have one, it is unsuitable for accounting, billing, invoicing, payments, etc.

Having the underlying library is, of course, not enough. You also need to be able to properly store (data structures, databases), transfer (wire formats, data structures), process (order of operations, rounding rules), present (UI toolkit) and so on.

I have never in my life joined a software project for any organisation that was able to do basic arithmetic on money correctly (and I have worked for companies ranging from small startups just needing invoices and very simple billing to risk management departments of the largest financial institutions on Earth, processing trillions of dollars on a daily basis).



The article is about data types for storage, not for intermediary values used as part of calculations, though. Are you proposing that everybody is storing monetary values "wrong", too?

And as a meta-point:

> I have been fighting over this with countless people, teams and companies. [...] I have never in my life joined a software project for any organisation that was able to do basic arithmetic on money correctly [...]

Are you absolutely sure that you are the only person who understands how to do accounting arithmetic on computers correctly?

My guess would be that the status quo is a combination of a lot of legacy code and procedures, but more importantly of differing priorities.

Maybe you value arithmetic correctness over simplicity of procedures (sometimes these need to be published in regulatory texts or even laws) or compatibility with other entities and their procedures much more than the industry average?


A shocking number of people (edit: who implement billing related software) are unaware of how many decimal points of accuracy their local tax code requires to calculate vat or sales tax correctly. And those things are often specified in terms of arithmetic correctness.

I had our CFO stand behind me while I talked him through every step of our VAT calculations once, because he was legally responsible if I made us round it wrong, to the wrong number of digits. And had we e.g. done something grossly incompetent like used floats for those calculations it most certainly would have been wrong, but so would it if I used fewer than five digits past the decimal point or failed to round in the right direction after that.

It's usually not hard, but it requires being aware that you need to look up the right rules. And know better than using floats.
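
To make that concrete, a minimal sketch with Python's decimal module; the rate, the 5-decimal intermediate step and the half-up direction are placeholders for whatever your local rules actually say, not the real HMRC rules:

  from decimal import Decimal, ROUND_HALF_UP

  VAT_RATE = Decimal("0.20")      # placeholder rate
  FIVE_DP = Decimal("0.00001")    # keep intermediates at 5 decimal places
  CENT = Decimal("0.01")

  def line_vat(unit_price: Decimal, qty: int) -> Decimal:
      raw = unit_price * qty * VAT_RATE
      # keep the intermediate value at 5 decimal places...
      intermediate = raw.quantize(FIVE_DP, rounding=ROUND_HALF_UP)
      # ...then round to pennies for the invoice line
      return intermediate.quantize(CENT, rounding=ROUND_HALF_UP)

  print(line_vat(Decimal("19.99"), 3))   # 19.99 * 3 * 0.2 = 11.994 -> 11.99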


A shocking number of people who work with software that handles money are just winging it. I worked on a project once that handled payment processing functionality for other products at the company (it was a B2B SaaS where clients could take payments through our software). We also handled payments related billing, since the clients would owe fees to us and to our payments gateway for their transactions.

The payments gateway we worked with calculated their fees to a thousandth of a cent, but of course we could only bill whole cents. So basically every billing period there would be a < $0.01 balance due that carried over to the next bill, and every so often the carryover would add up to a full cent that needed to be charged. When we implemented our MVP we (engineering) explained this to the product and business teams, and it blew their minds. Our suggestion was to have a tooltip on the 1 cent charge with a link to a help article explaining how the accounting worked, but they were strongly against it and had us list the 1 cent as something like "other fees" with no explanation. They seemed convinced it was a thing that would happen only rarely, even as we were telling them otherwise. Anyway, that 1 cent charge just infuriated clients for some reason, and every month or two we'd get bug reports about it or requests to explain why it kept showing up. Fun times...


Why was it listed separately at all?

If the month's charges were 406.783228 and I get a bill for 406.79 then that seems perfectly good.

If I get a bill that says 406.78, plus a separate 0.01, that's weird.


> Why was it listed separately at all?

Because the bill included a detailed breakdown of all the fees by type and transaction. Typically our clients were charged a fixed monthly fee, a fixed authorization fee charged every time a payment was attempted (even if it was declined), and a fee applied to successful payments that was a percentage of the payment amount. There were other fees for things like processing chargebacks, but IIRC they were the same for everyone.

> If the month's charges were 406.783228 and I get a bill for 406.79 then that seems perfectly good.

Yeah, we definitely weren't allowed to round up and keep the change. I can't claim to know all the details involved but I suspect that doing so would've at least violated our contract with the payments gateway. Might've actually been illegal.

> If I get a bill that says 406.78, plus a separate 0.01, that's weird.

That's not how it worked. For the sake of simplicity let's assume that your activity is always the same, therefore you have new charges totaling exactly $406.783228 every month.

* Month 1: You owe $406.783228, your bill is $406.78, a balance of $0.003228 rolls over.

* Month 2: You owe $406.786456, your bill is $406.78, a balance of $0.006456 rolls over.

* Month 3: You owe $406.789684, your bill is $406.78, a balance of $0.009684 rolls over.

* Month 4: You owe $406.792912, your bill is $406.79, there's a $0.01 "other fee" line item on the bill, and a balance of $0.002912 rolls over.
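
The whole carry-over scheme fits in a few lines; a sketch (the rounding-down choice and the names are mine, the figures are the ones from the example above):

  from decimal import Decimal, ROUND_DOWN

  CENT = Decimal("0.01")

  def bill_month(new_charges, carryover):
      owed = new_charges + carryover
      billed = owed.quantize(CENT, rounding=ROUND_DOWN)  # bill whole cents only
      return billed, owed - billed                       # sub-cent balance rolls over

  carry = Decimal("0")
  for month in range(1, 5):
      billed, carry = bill_month(Decimal("406.783228"), carry)
      print(month, billed, carry)
  # 1 406.78 0.003228
  # 2 406.78 0.006456
  # 3 406.78 0.009684
  # 4 406.79 0.002912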


It's unlikely that it would be illegal, or violate the contract with the payments gateway, if you charged the customer 406.78, and you made up the $0.003228 discrepancy so the gateway is paid off.


I wasn't talking about rounding up. I meant exactly the same thing you're saying about month 4.

> Because the bill included a detailed breakdown of all the fees by type and transaction.

Was it all rounded [down] to the nearest penny?

That type of bill can already fail to add up to the total very easily, like x.xx4 + x.xx4 + x.xx4. So I'm still not sure why there was a need to have a line item to explain this single penny.

Was there only a single charge on each bill that used fractional pennies? So that this was the only time that things wouldn't add up perfectly?


Canada abolished the penny some years ago. If some retail item in a store costs $2.03 and you insist on paying with cash, you will have to use a nickel and pay $2.05.

Yet, life goes on; nobody goes to jail.


Generally such rules are only about cash, though.


> Anyway, that 1 cent charge just infuriated clients for some reason, and every month or two we'd get bug reports about it or requests to explain why it kept showing up.

There is an obvious alternative here, which is to just absorb that fraction of cent, so the customer doesn't see it. Handling bug reports and infuriated clients costs more.

You could even just round up to the penny and ding the customer; if there is a 0.3 cent fraction coming from the payments gateway, turn it into a customer-facing penny, thereby collecting an extra 0.7 cents.

That way, too, there would almost certainly be zero complaints and bug reports.

If the customer is supposed to pay $103.45395 every month, you could turn it into $103.46, pocketing an extra $0.00605, or into $103.45, where you're out $0.00395.

Think about it; when you eat at a restaurant, often the prices for a meal are round, like $11.50, even though the restaurant's expenses are down to the penny. Why is that? Because they arbitrarily set the price. They don't say, oh, our ground beef supplier charges to the penny, so lunch will have to be $11.57.

Oh, the business teams found the approach puzzling---but what do they know, right? If they were smart, they would be software engineers.


> Oh, the business teams found the approach puzzling---but what do they know, right? If they were smart, they would be software engineers.

Well in this case the business teams selected the 3rd party payments gateway that the company would work with, negotiated the contracts with them, and worked with the 3rd party to set up how the customers would be charged. They and/or the product team determined that we'd use the 3rd party system to handle the billing because building it in house wouldn't generate any new revenue. They weren't stupid, but they chose an approach to payments processing that was pretty low level (because it generated more revenue, of course), without anyone at the company having a good understanding of low level payment processing details. The engineers learned because we kind of had to, but three years into the project (when I left) business/product would still routinely struggle with the details.

So anyway, WRT billing, the task handed to engineering was: pull the billing detail from the 3rd party's API and assemble it into a statement that can be handed to a client. We had zero control over how the charges were generated or over applying any rounding to the total.


If someone had a gun to my head saying, don't hide the sub-penny slices from the billing, I would just do the billing in thousandths of a cent, rather than make "leap cents" appear every couple of bills:

  Amount owing: $123.45678

  Please pay one of: $123.46 (a credit  of $0.00322 will be applied)
                     $123.45 (a balance of $0.00678 will carry forward)
On the next statement, if they paid $123.46:

  Previous balance: (  $0.00322) [credit]

  New charges:        123.45678  { here we have a detailed breakdown }

  Amount owing:       123.45356

  Please pay one of: $123.46 (a credit  of $0.00644 will be applied)
                     $123.45 (a balance of $0.00356 will carry forward)
etc.

That's literally "put the billing detail from the 3rd party API and assemble it into a statement". Since the billing detail from the 3rd party API is in thousandths of a cent, then that implies the statement must have thousandths of a cent.

If the two payment options were determined to be too confusing, one of the two could be dropped.


> I would just do the billing in thousandths of a cent, rather than make "leap cents" appear every couple of bills

Again, that was not something we could control.


I think that's a case for just dropping that cent. As long as it's your own money and not tax you're underreporting, it's not a problem, provided you're consistent about it.


Carrying the remainder microcents as a bill seems overkill and not necessarily correct. Accounting regulations and accepted practices have well established rules on rounding, like bankers rounding (1).

You could probably treat the extra microcents as being on the next pay period. Though that's annoying as if I close an account I'd expect it to be paid in full, not a few microcents remaining.

1: https://stackoverflow.com/questions/45223778/is-bankers-roun...


>A shocking number of people are unaware of how many decimal points of accuracy their local tax code requires to calculate vat or sales tax correctly.

What is your jurisdiction? In Canada, I can't for the life of me imagine the CRA would remotely care about decimal-point accuracy. In fact, most of their online forms explicitly remove the decimals.


UK. The rules may have changed now, it's a long time since I implemented the rounding rules here, but the last time I did it required 5 decimals of accuracy. The rules also used to specify how you needed to account for line items vs. sub-totals in your invoices to ensure you didn't find any "workarounds" to shave off some pennies of tax. (In fact, the last time was while the tax authority was still called the Inland Revenue, which it hasn't been for years.)

For aggregate totals of your VAT liability across your total set of invoices, you'd be fine with rounding up to the nearest pound, to the Inland Revenue's benefit. For individual invoices however, you were required to stick to very specific rounding rules.


> For aggregate totals of your VAT liability across your total set of invoices, you'd be fine with rounding up to the nearest pound, to the Inland Revenue's benefit

On personal tax forms you have to round in the taxpayer favour. If your income is 12345.67 you round it to 12345. If your expense (say giftaid) is 12345.67 you round it to 12346.

Surprised it's the other way with VAT, but then I do very little with tax other than click a few buttons and confirm "yes, you have to tax me as I have children".


I'm not sure it's the other way with VAT, as I said it's been many years and the rules may well have changed multiple times.

The key, though, is that with taxes, if at all unsure you've got the rules right, the safest option is to round in the tax office's favour.

It's in general a lot less painful to explain an overpayment than underpayment if something is broken.

Of course, better yet, get it right.


That's for final amounts though, right?

I bet they care about you not throwing away decimals in intermediate calculations for VAT or sales tax.


I think the point is that the integer arithmetic implementation your CPU provides is wrong in at least one jurisdiction, so (for example) the machine code in the article is wrong.


> done something grossly incompetent like used floats for those calculations

So, in another life I worked on reporting software for a foreign branch of a US bank. You've heard of the bank. You would probably recognize the CEO's name, in fact.

We had been fucking this up for years. I fixed it. We had some customers who yelled at us because our reports were "wrong" i.e. they were double checking our work and apparently making the same mistake. They could not be reasoned with. Bear in mind, we're talking about differences of pennies, or a few dollars on very large transactions. Some of our customers insisted we were calculating the values incorrectly and demanded we "fix" it.

What do you think happened next? You have one guess.


> A shocking number of people are unaware of how many decimal points of accuracy their local tax code requires to calculate vat or sales tax correctly.

I would imagine almost no-one knows this (-: What's a shocking number?


> What's a shocking number?

Almost every developer I've worked with who hasn't implemented invoicing or billing at least once and had their finance team yell at them for producing wrong numbers...

(and I'll edit this to add the limitation "who implement billing related software" - it's still true, and closer to my intended point)


> A shocking number of people ... are unaware of how many decimal points of accuracy their local tax code requires to calculate

A shocking number of people who create tax codes have no idea how many decimal places they are using.

It's probably better now, but I recall having to reverse engineer the tax tables to figure out how many decimal points of accuracy were used, and what rounding rules were used, so we could match their numbers.

These numbers would change from year to year, with no change in the underlying tax codes.


I always hated tables. At least when I last had to implement UK VAT rules the rules were very precisely defined. But I've had to deal with stupid tables implementing rules they couldn't be bothered to spell out before. Yikes.


in my experience, these are often written off as error.

if the error is less than their hourly salary rate, it isn't even worth mentioning.

if it's worth a day or two of salary, it's nice to fix but never a priority


For it to be written off as error you need to know the discrepancy, which means you need to know what it's supposed to be. When e.g. calculating the VAT or sales tax you owe the government, if the rounding deviates from legal requirements then unless it's in their favour you can be in for a bad time.


I think for most large businesses there are pretty considerable error bars here. If you say you owe 1,000,000 a year in VAT and the government says you owe 1,010,000, it's cheaper to pay the difference than to dig into why it's off.


If you report 1,000,000 and certify that it's the right number, and the government audits you and finds you should have paid 1,010,000, then depending on jurisdiction you might be entirely ok, or you might find you're not just paying the difference plus interest, but also a fine, and bearing the cost of additional audits going forward, and that your finance director will not appreciate having to address questions aimed at figuring out whether anything criminal is involved. Repeat the mistake a few times, and the level of scrutiny will escalate.

There's a reason that in 28 years of working in software, the only thing the financial teams I've worked with have obsessed over is whether or not we get the VAT calculations right, and the "sticker price" of the discrepancy has never been what they worry about. For calculations that do not involve getting tax amounts wrong, they often couldn't care less about much bigger discrepancies, but get tax wrong in the wrong jurisdiction and it's a lot of pain.


It's strange. In Russia, a small error in VAT will get you a letter from the tax service: "pay us the small error voluntarily, or we will schedule an inspection". The letter will be automatically generated by the ASK-NDS system (roughly translated as Auto Check VAT).


How do they know there's an error?


Cross-checks with your counterparties. For every bit of incoming VAT there should be outgoing VAT from your supplier, and for every bit of outgoing VAT there should be incoming VAT for your client and/or a sale to an individual customer.

If your incoming VAT is not matched with outgoing VAT from your supplier, you will be charged.

If your supplier declared VAT but failed to pay it, you have a choice: either you pay it yourself, or you will be inspected to prove that the transaction was not fake.

Every sale to an individual customer in Russia has to be uploaded to the tax service's cloud. You (as the customer) can check your receipt online or in an app, and get a reward for reporting tax evasion.

This system boosted VAT revenue x1.5 in a few years.


Why would it be wrong to use floats?

That would have been my default assumption


When you're doing floating point arithmetic on a computer, it will approximate and round certain values in ways that don't match the way humans do it when they're, e.g. doing accounting.

So you need to run a massive physics simulation really fast? Yes, floats are great.

You need to calculate taxes on a massive corporation's fiscal year? Bad idea.

Some libraries advertise "arbitrary precision", many computer systems have a "decimal" type intended for currency, etc. and then they won't make all the same mistakes, but as the OP said you still need to control rounding rules and make sure they match the law.


> You need to calculate taxes on a massive corporation's fiscal year? Bad idea.

That depends on whether the hundred-billion-dollar corporation cares about being off by a dollar.

And by "off" I mean "different from how humans round", not necessarily further away from an infinite-precision calculation. In fact at "massive corporation" level I would guess that binary floating point is more accurate than a typical fractional penny system.


> That depends on whether the hundred-billion-dollar corporation cares about being off by a dollar.

How many hundred billion dollar corporations are private? Public companies would care a great deal about accounting accuracy.


Is it worse for a hundred billion corp to be off by a dollar than for a hundred million corp to be off by a third of a penny?


it's not so much how much it is off but that it's off at all. If the numbers don't add up then they don't add up. If there's any kind of difference then it has to be found and accounted for and it becomes a needle in a haystack search to account for the difference. Think about trying to find $0.05 spread across hundreds of thousands of transactions due to rounding issues.


I'm going to let you in on a secret...

Every single publicly listed company, every single one of them, is off when it comes to calculating their taxes by way more than just a dollar. And I don't mean clever accounting tricks or tax avoidance schemes, I just mean in terms of actual mistakes being made.


If they could just pay the dollar and never have to worry about it again, sure. But the point is for them to have confidence that the math is unimpeachable and identical to whatever auditor or tax official would compute at every step of the way so you don't just have to guess at correctness with some waving of hands.


Surprisingly common values like 0.1 don't have a precise representation in binary for most formats, including standard floating point number formats. See https://0.30000000000000004.com/ for more detail than you can shake a stick at.

Also, if the local tax code states using 5 decimal places for intermediate values, then you will introduce "errors" using formats that give greater precision as well as those that give less precision. Having worked on mortgage and pension calculations I can state that the (very) small errors seen at individual steps because of this can balloon significantly through repeated calculations.

Furthermore, the name floating point gives away the other issue. Floating point numbers are accurate to a given number of significant figures not decimal places. For large numbers any decimal places you have in the result are at best an estimate, and as above any rounding errors at each stage can compound into a much larger error by the end of a calculation.
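
Both failure modes are easy to demonstrate in any language with IEEE 754 doubles:

  # Decimal fractions like 0.1 have no exact binary representation...
  print(0.1 + 0.2)            # 0.30000000000000004
  print(0.1 + 0.2 == 0.3)     # False

  # ...and a float64 only carries ~15-16 significant decimal digits,
  # so at large magnitudes the cents silently vanish:
  x = 1e16
  print(x + 0.01 == x)        # True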


IEEE standard floating point uses a binary mantissa.

And binary has trouble representing fractions that are common in prices:

  $ bc
  obase=2
  scale=20
  1/5
1/5 in binary is a repeating binary fraction: 0.0011001100110011...

Just as you can't express 1/3 or 1/7 precisely as a non-repeating decimal fraction, you can't express 1/5 and 1/10 as a non-repeating binary fraction. As a result, most prices involving cents in currency cannot be expressed precisely as binary floating point numbers.

edit: fixed formatting


The biggest issue is you now need programmers who know about epsilon computation and error propagation when working with incorrect numbers. Then you need to know when to fudge the visual representation of your incorrect number (and you probably also need to understand when your programming language / libraries do fudge the output for you).

FP numbers have their use, but they're better reserved for scientists doing actually scientific stuff, not for representing what are actually tiny numbers (in the grand scheme of things) which can be represented perfectly by other means.


If there's an applicable law or regulation that says "you must do x", and you do y (and that yields different results), you'll get into trouble, even if your way yields "better" or "more accurate" results.

This is not to say that using floats and rounding correctly necessarily does yield different results, by the way (although most likely it will) – but if they do differ, you're going to have a bad time using floats.



> 0.3-0.2-0.1

-2.7755575615628914e-17

And now you overdrafted


Floating point calculations without some final rounding step before presentation/export/storage are almost always wrong, since you're implying much more precision than is justified by your source data.


The problem isn't rounding the final result. The problem is that the source data itself can't be accurately represented.

There is no floating point value equal to 0.3.


That’s not a problem by itself.

You can represent 0.3 as 0.300000…0004, which rounds to 0.3 again in the end.

But you need to reason about the number and nature of intermediate operations, which is tricky, since errors usually accumulate and don’t always cancel out.


> That’s not a problem by itself.

No, it really is the original sin here.

> since errors usually accumulate and don’t always cancel out.

The problem is that from the system's perspective, these aren't "errors". 0.3000000....4 is a perfectly valid value. It's just not the value that you want. But the computer doesn't know what you want.


> The problem is that from the system's perspective, these aren't "errors".

When I say "error" here I mean the mathematical term, i.e. numerical error, from error analysis, not "error" as in "an erroneous result".

There is a formalism for measuring this type of error and making sure it does not exceed your desired precision.

> It's just not the value that you want.

My point is exactly that if you're looking at 0.300000...4, you aren't done with your calculation yet. If you stop there and show that value to a user somewhere (or are blindly casting it to a decimal or arbitrary precision type), you are using IEEE 754 wrong.

You know that your input values have a precision of only one or two sub-decimal digits, in this example, so considering more than ten digits of precision of your output is wrong. You have to round!

It's the same type of error that newspapers sometimes make when they say "the damage is estimated to be on the order of $100 million (€93.819 million)".

Yes, this is often more complicated and error-prone (the human kind this time) than just using decimals or integers, and sometimes it will outright not work (since it's not precise enough – which your error analysis should tell you!)! But that doesn't mean that IEEE 754 is somehow inherently not suitable for this type of task.

As a practical example, Bitcoin was (according to at least one source) designed with floating point precision and error analysis in mind, i.e. by limiting the domain of possible values so that it fits into double-length IEEE 754 floating point values losslessly – not because it's necessarily a good idea to do Bitcoin arithmetics using floating point numbers, but to put bounds on the resulting errors if somebody does it anyway: That's applied error analysis :)


Do it a million trillion times and we're talking cents overdrafted (almost)


If a rounding error put me a million-trillionth of a cent into my overdraft, I’m pretty sure my bank would still activate that $20/mo overdraft fee :P


If you just add up the errors, sure. What is riskier is that you tip values the wrong direction right before applying a rounding step, or end up with an error right before multiplying a now wrong per-unit value with some large-ish factor.

Often these things are not a big problem on their own, but then later gets compounded because someone does something stupid like passing these imprecise values around to be distorted further all over the place.

And sometimes the reason it doesn't become a legal problem turns out to be because your finance department quietly works their way around it by expending expensive manpower accounting for discrepancies that shouldn't be there in the first place, and so increases the cost to the business by many magnitudes over the loss the developers might have assumed to be the worst case (if they're aware of the discrepancy at all).

This is one of those things you can get away with many times, many places, with no ill effects. But when it finally bites you it can get expensive and/or really bad to deal with, and it's fixed by simply never doing money calculations on datatypes with imprecise arithmetic, and having a five minute conversation with your finance team about what your local rules for rounding tax amounts are.


- "or end up with an error right before multiplying a now wrong per-unit value with some large-ish factor."

Where in financial accounting do people multiply an amount of money by a multiplicand larger than order-of-unity?


In accounting, no, while preparing input to the accounting in the form of generating invoices, I've lost count (sorry) of the number of times I've seen people doing tax calculations etc. on unit prices and then multiplying by number of units ordered, and then further compounding potential issues by adding up these numbers from multiple invoice lines. None of which is usually the right thing to do, all of which you often "get away with" without causing sufficient discrepancies, and so which people often fail to catch in testing. Until you suddenly don't.


- "multiplying by number of units ordered,"

Yeah, that's one example. I wasn't imaginative enough; thanks!


Decimal systems have to round too, so that's a pretty weak dismissal.


Laws are typically written by humans, and we use base 10, not base 2. We think $0.03 is an exact number, but floats can’t represent 0.03 exactly.


I think the historical interpretation is also relevant. The systems that did accounting before digital computers used base 10, so the first computerized systems for accounting used base 10 also. This legacy extends to the point that mainframes often had (and I believe still have) special decimal floating point math instructions. There have been several ways to accomplish this, such as BCD (binary coded decimal), where numbers are stored in base 10 using a 4-bit encoding per digit. I believe this can be arbitrary precision, but I don't have any experience myself. Some hardware also has decimal32 and decimal64 floating point support, which is part of recent versions of the IEEE 754 spec [1]. Databases also often have a DECIMAL type for doing calculations on money values [2]. So I think it's not just that laws say it should be a certain way, but also that it is important to maintain consistency between systems over time.

1: https://en.wikipedia.org/wiki/Decimal64_floating-point_forma... 2: https://dev.mysql.com/doc/refman/8.0/en/precision-math-decim...


Floats lose precision unexpectedly with certain fractions that are perfectly representable in decimal, and also with certain integers once you get high enough.

The standard in ad-tech (not sure about banking) is to use int64s representing either microdollars or microcents, so a max capacity of roughly 9.2*10^12 or 9.2*10^10 dollars respectively


floats are an imperfect representation of real numbers and as such, there are an infinite count of real numbers that cannot be accurately represented with floats (and doubles).

It gets even worse when you start doing calculations on floats/doubles.

These inaccuracies are ok for a lot of things. graphics often uses floats and the errors are small enough they don't matter.

But currency absolutely needs to be accurate, and for that reason floats/doubles are inappropriate.


Floats are not how you do money math, if you run into anyone trying to do money math and they say they use floats, there is a 99.999999% chance they are wrong.

> Maybe you value arithmetic correctness over simplicity of procedures

Lots of places (not typically the USA) have this codified in law. For example, do a web search for: EU money rounding rules. You will find several different rounding and precision rules, depending on the context of what you are doing with the money, all from places like the Central Bank and the EU Commission.

It's mostly US developers that are clueless here, because US laws are fuzzy at best, and the general rule is, you do whatever your bank/regulatory authority does, and if they don't happen to know (and I've met several that don't), then you have to figure it out yourself.

In the USA, we use decimal.ROUND_HALF_UP, because we have seen in practice this is what our USA based banks & govt tend to do in the wild. It should be noted IEEE 754 rounding recommends using decimal.ROUND_HALF_EVEN. https://en.wikipedia.org/wiki/IEEE_754#Rounding_rules

In other places, we do whatever their laws require, or treat them like the USA and do whatever our local bank/govt authority tends to do in practice.
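
For reference, the difference between those two modes is easy to see with Python's decimal module:

  from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

  CENT = Decimal("0.01")
  for amount in ("2.005", "2.015", "2.025"):
      d = Decimal(amount)
      print(amount,
            d.quantize(CENT, rounding=ROUND_HALF_UP),    # 2.01, 2.02, 2.03
            d.quantize(CENT, rounding=ROUND_HALF_EVEN))  # 2.00, 2.02, 2.02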


I agree with the parent. Almost everyone thinks floats are fine. I work in lending, and many of my coworkers, who claim to have CS degrees, do not understand floats at all.


GP isn't just saying "don't use floats", though. (And even that is only a heuristic: It's possible to get correct results using floats, but you need to be very diligent about when and how you round, so in practice it's easiest to just avoid it.)

They're saying that only arbitrary precision arithmetic is acceptable, and additionally claiming that everybody else in the world gets money arithmetic wrong.

I doubt both of these statements, and especially the assertion that there's exactly one "correct" way of doing arithmetic with money.


Arbitrary precision decimals are the best solution in most cases. I have worked at multiple companies where people represent currency with floats and then wonder why they get strange results in some cases.


I have a personal anecdote on this subject. A long time ago I worked at a bank and I had to calculate a large number of accounts regarding agricultural loans. These were state-sponsored loans. When I finished my task (this was a Java job), I found that sometimes the results were off by $0.01. So I asked my boss how I should do the rounding, to which he replied that an error of up to $1 was acceptable. If I recall correctly, the amounts were in the hundreds and thousands.


On my first day in a new company, not even a senior dev yet, I met with the head accountant. I asked about her top problems, and she said her top problem was that the application would produce a different invoice on screen, a different invoice when printed as PDF, and a different invoice in the accounting software. About 1% of all invoices were affected, but due to the amount of billing they were doing (telecommunications and advertising) they needed to have 3 FTEs just to correct the invoices.

And correcting the invoices meant playing with numbers so that at least the PDF and accounting software agreed on the total value and tax.

She also said they had at least 2 different employees and an external company look at it, and none were able to fix it. She also told me not to bother because she did not believe the problem could be fixed (that's what she had been told).

I looked at the software, it had two separate copies of the invoice calculation (separate for on screen and for printing to PDF). And of course it would send the invoice to the accounting software which calculated the invoice in a different way still.

I ran a couple of experiments to reverse engineer how the accounting software did the calculations -- the exact order of them and the exact rounding rules. Then I built a small module that captured those calculations. Then I changed all doubles to arbitrary precision.

It took two days and the problem was fixed, but it took a couple more days before the accounting department actually believed it.


> I looked at the software, it had two separate copies of the invoice calculation (separate for on screen and for printing to PDF).

That's actually kinda normal, in some industries.

For example, an amazon marketplace seller's warehouse management system might not be tightly integrated with Amazon's basket/checkout display logic.

In some situations the results of recalculating are supposed to be different. For example, if there's a "5% off when you buy 3 widgets" offer and you check out with 2 widgets in your basket, the offer doesn't apply. But if you checked out 3 widgets, thus getting the offer, then the seller found they were low on stock and could only send you 2, you should get the 5% discount on those 2.


> It took two days and the problem was fixed, but it took a couple more days before the accounting department actually believed it.

Those 3 people that now didn't have a job probably weren't all that happy xD


if there ever was a reason to apply DRY at all costs, this is one


> I looked at the software, it had two separate copies of the invoice calculation (separate for on screen and for printing to PDF). And of course it would send the invoice to the accounting software which calculated the invoice in a different way still.

Your setup made it sound like something crazy and inane, like "the PDF printer used on those machines changed floating point rounding modes" or what have you.


In olden times we would "print to PDF" as in have a little piece of code to format the document that would be sent to a printer which could be a PDF document.

In this particular case we had two separate pieces of code, one running on the client (for on screen presentation) and one on the backend (to create the PDF on a shared location and produce a download URL).


There was the famous "Xerox photocopier changes digits sometimes" bug, but that's not what's happening here.


The hero I want to be


It is easy to be a hero when the company is shitty.

My career advice is to work in a field / company / job where you can be somewhere in the top 10-20% of all employees. Just don't overdo it: if you are top 1%, you are probably aiming too low and could be working in a better paying, more rewarding field / company / job.

For a lot of my career I was working for financial institutions like banks. A lot of really badly managed projects with definitely not top level developers. Easy to be a top performer. I really like helping people and projects and it was working well for me especially when it was easy for me to provide valuable help.

I got hired once for a really good company with really top performers and suddenly I lost the status that I was so used to. I was keeping up with my work, sure, but I was no longer a shiny star. I got back to working for banks.


Exactly. I think many people start overthinking things in banking. Most accounting/finance departments are ok with rounding pennies every month.

I run a Commercial Real Estate Servicing platform, where we are accruing interest on large balances daily. Our method is to not do the rounding daily, but add up all the numbers for a given period, say a month, and then round to the penny and create a single adjustment rounding transaction along with it. Accounting departments love us for it.

If we rounded daily before storing the amount, the adjustment for accounting is usually a few pennies at least every month they have to make. Our method, it's roughly $0.01 per year with monthly periods, adjusted usually at the very end. Which on a $20MM loan, is very well within the bounds of acceptable.
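
A rough sketch of that approach (the rate, day count convention and balance are made up, not our actual terms):

  from decimal import Decimal, ROUND_HALF_UP

  CENT = Decimal("0.01")
  balance = Decimal("20000000.00")                 # hypothetical $20MM loan
  daily_rate = Decimal("0.055") / Decimal("360")   # hypothetical 5.5% act/360

  # accrue at full precision every day, round only once per period
  accrued = sum(balance * daily_rate for _ in range(30))
  posted = accrued.quantize(CENT, rounding=ROUND_HALF_UP)
  adjustment = posted - accrued                    # single rounding adjustment entry
  print(posted, adjustment)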


I love how small details matter here: always rounding 2.50 up would significantly skew the numbers toward higher values, so there are rounding modes like ROUND_HALF_EVEN that break the tie by rounding to the nearest even digit, up for some values and down for others.


Yeah. I worked on a lending platform that used floats (!!!), and their response when I brought this up was that as long as the result is within something like $10, it was not an issue.

I brought up specific math problems that floats couldn't handle and they weren't fazed


The real reason is because Catherine Zeta-Jones and James Bond actually did implement a program in Malaysia that collects all those rounding errors in a separate bank account. And since it went global, affecting everyone, everyone thinks it is simply normal.

Or it is because rounding errors happen and accounting is a bitch. The first option makes for a better movie plot though.


I guess I’m old because I credit that plot to Superman III. TIL it’s called Salami Slicing https://en.wikipedia.org/wiki/Salami_slicing_tactics#


Man, you and I remember Office Space differently! /s


Whenever I input my tax data on forms, it always gets rounded to the nearest dollar. It's strange that accuracy doesn't seem to be a big priority.


I imagine over millions of returns it probably evens out. Also, for the majority of taxpayers in the US, the tax table does things like:

If AGI is 40,001 to 40,025, then your tax is X dollars.

Being accurate to the penny isn't worth the trouble.


Yes, practically I can see why. But when my W2 contains cents, and my tax forms make me sign for accuracy under penalty of whatever blah blah, seems pretty odd that they don't accept my cents and then do their own internal rounding. Maybe it was just the tax software I was using.


Sounds like the start of a movie script



I'm assuming you did the correct thing and engaged in an Office Space-esque penny-stealing operation after learning this? :)


I had such a case too; the solution is simple:

round in favor of the bank / financial institution you are working for


No, it does not work this way. Any good accountant will see a difference in values, even the smallest one, as a sign of an incorrect calculation. It does not matter which way it goes, they will feel compelled to figure out who is wrong. At least a good accountant will.


There is no accounting error; let's say a customer has purchased / used a service for 1.433 USD.

You issue an invoice for 1.44 USD (aka, amount due), then the 1.44 USD is used as a basis for accounting and is all consistent.

Then, if you are a nice company and the situation applies in your case, you may issue a credit in favor of the customer for 1.44 USD - 1.433 USD that will be used as a discount on a future invoice

The best part is that the moment where you decide to issue the credit invoice or not, is the perfect moment to track the rounding errors and even keep a very detailed journal of the entries (e.g. for auditors).


You added 10 of those items as inventory with a total value of 14.33, then you sold each individually as a total value of 14.40. After that transaction, your inventory account has -0.07 on it, but there aren't any items at all there.


From experience, those differences surface during stock taking. And most companies are really bad at that. And if they surface, they are corrected by inventory adjustments (in units of measure, not value, which is a different can of worms). As long as those adjustments aren't too extreme, nobody really cares.

A good accountant will sooner or later investigate those rounding errors, though, as they will show up somewhere ultimately. And a general policy of rounding in one direction is the last thing you want an auditor to find.


So accountants are like number detectives doing what's essentially debugging work just like a coder would?


They design the system, detect the failures, explain and correct them. So their work is even more like programming than your comment implies.


One can use more than 2 decimal places for rates, but final invoicing has to be two decimal places (at least in most countries). Banks/tax authorities don't carry anything beyond 2 decimals.

Eg: fuel is usually priced to 3 or 4 decimals, so 4 gallons x 25.5444 USD/gallon = $102.1776 to 4 dp, but it will be billed as $102.18
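
e.g. with a decimal type (half-up rounding assumed here):

  from decimal import Decimal, ROUND_HALF_UP

  total = Decimal("25.5444") * 4                            # 102.1776
  print(total.quantize(Decimal("0.01"), ROUND_HALF_UP))     # 102.18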


Any rounding discrepancies simply get posted to a rounding-error account. For example, oracle's ledger will complain if there is rounding and no rounding-error account to post to. https://support.oracle.com/knowledge/Oracle%20Cloud/2411363_...


That is absolutely not how it works. Rounding is exactly specified in the underlying contract always and you need to implement the correct rounding. For example, here is the rounding table for compounding calculations in the ISDA definitions (these are very standard for a wide range of contracts, but this particular table is for various overnight swap rates used in interest rate derivatives)[1].

[1] https://globalmarkets.cib.bnpparibas/app/uploads/sites/4/202...


Interesting side note: ASME also has a standard for rounding on engineering drawings. A few years back I had to build a custom function in Excel to match the standard, because our calcs weren't matching the customer's calcs.


And how do they round ties? Does 0.00005% round to 0.0001%? Is tie breaking usually included in such contracts?


For these benchmarks, yes, that is defined.

Where issues could come in is when these things are multiplied with a bunch of other numbers (each number with defined rounding, but not after each operation) and then have some defined rounding at the end. There, different computer numerics could give slightly different results, but those can easily be resolved on settlement (for small stuff that stays well within the back offices - at least that is how I remember it).

Also, it's not totally unusual for one or both participants to forget about some rounding they might have agreed to bilaterally if it was some one-off, etc.


I've worked in banking.

I assure you, I would have had a million bugs filed on that before it even hit production.


Heh. But usually it's the other way around unless they remembered to specify it the right way. Forgiving $0.01*N customers is cheaper than dealing with an irate customer.


People care less about the dollar value than about reconciling. If there is some external system they should match, they really want it to match exactly.



I did join somewhere that could do it correctly, because they had some very long-running POS software. It could even do things like "split bill three ways" correctly, allocating both the spare penny from the division and the tax calculation, such that you could add the bills back together again and get the same numbers as the original bill.

Using a "money" class that stores things as integer pennies gets you a long way there. "Division" and its close friend "multiply by noninteger number" are the only real problems, so you need to be careful not to provide a generic method and instead methods like divideWithLocaleTaxRounding(). You also need to check whether you're supposed to apply tax per-item or you can do it to the whole bill.

I think we had a "apply tax to whole bill but then re-distribute it to line items" method, which guaranteed that the total would be correct.

There are reasonable arguments for "integer decimal fraction of penny" as the correct unit. Digikey price some parts in 0.1 of a penny or cent, for example.

Attempting to convert things between binary "fractions" (floating point) and decimal fractions will result in misery.

I think we also had a "rational number" class for storing things like "1/3".


Integer cents is fantastic for POS software and similar things that deal with at most a few thousand dollars at a time. The place where it starts to fail is when the absolute numbers get really large, not really small. Think "United States Federal Reserve" or "UBS". Then remember that some of these institutions need to deal with accounts denominated in Zimbabwean dollars.


Really small can be an issue when it gets really small e.g. tarsnap’s accounting is pretty infamously done in attodollars (1e-12). U64 only has room for 19 decimal digits so you’re limited to 7 figures up.


Tarsnap's pricing is in picodollars (1e-12) but the accounting is attodollars (1e-18). Tarsnap uses 128-bit balances -- 64 bits of attodollars and 64 bits of whole dollars.

If I ever have someone paying me more than 2^64 dollars, I'll rewrite Tarsnap's accounting system.


Funny thing, I actually wrote one POS application (magstripe + EMV chip & pin + contactless).

EMV uses integers encoded as BCD for money. If I remember correctly, in most cases it is 6 bytes, i.e. 12 digits. That is more than 2^32 (most POS machines were 32-bit ARM until relatively recently).

The terminal I worked on had 32 bit ARM. Rather than convert 12 digit numbers to 32 bit integers I decided to write my own arithmetic library that did operations directly on BCD strings of arbitrary length (but, in practice, EMV only allows 6 bytes for the amount anyway).
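
Not the actual library (that was for the terminal, presumably in C), but the core idea is just schoolbook addition on decimal digit strings, something like this Python sketch:

  def bcd_add(a: str, b: str) -> str:
      """Add two unsigned decimal digit strings (e.g. 12-digit EMV amounts)."""
      width = max(len(a), len(b))
      a, b = a.zfill(width), b.zfill(width)
      carry, out = 0, []
      # walk the digits from least significant to most significant
      for da, db in zip(reversed(a), reversed(b)):
          carry, digit = divmod(int(da) + int(db) + carry, 10)
          out.append(str(digit))
      if carry:
          out.append(str(carry))
      return "".join(reversed(out))

  print(bcd_add("000000102178", "000000000022"))  # 000000102200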


The current US national debt represented in integer cents requires 52 bits. It can trivially increase 4,000x before we need to worry about 64-bit balances.


Peak inflation in Argentina was 20262.80%.

So, not as much margin as one might think.


Python can deal with 1000+ digit integers and you can store them as strings in the database. Not sure about other languages.


Rounding monetary values is a complex, opinionated and business-defined operation. Sometimes it's even orthogonal to the datatype used.

Things get out of hand when you need to round multiple different things that have to sum up at the end.

For example:

- Items in an invoice are rounded and summed. (eg. $1.1234 * 5.678kg)

- Payments of an invoice can be paid in multiple installments, with interests that are also rounded (eg. 1.77% per month).

- The value paid of interest *per item* must match the total value paid of interest in all installments of all invoices in the same period.
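
The first bullet alone already bites: rounding each line and then summing is not the same as rounding the sum. A sketch with made-up amounts:

  from decimal import Decimal, ROUND_HALF_UP

  CENT = Decimal("0.01")
  lines = [Decimal("1.004"), Decimal("1.004"), Decimal("1.004")]

  per_line_sum = sum(l.quantize(CENT, ROUND_HALF_UP) for l in lines)   # 3.00
  rounded_total = sum(lines).quantize(CENT, ROUND_HALF_UP)             # 3.01
  print(per_line_sum, rounded_total)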


Would IEEE decimal128 be sufficient (instead of arbitrary precision)?

"Formally introduced in IEEE 754-2008, it is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations."

https://en.wikipedia.org/wiki/Decimal128_floating-point_form...


Hypothetical solutions that do not exist are none of my concern.

Did you know different countries and different currencies have different rounding rules, for example for tax-related calculations? Does "IEEE decimal128" support this? Unless you can get all countries on our planet to agree on a single standard, any solution that does not allow specifying rounding rules is pretty much useless (unless you want to implement rounding yourself, which tends to be very tricky -- I know because I have attempted this a couple of times).


This is not hypothetical.

Yes, of course IEEE decimal supports setting the rounding mode. The authors of the spec aren't ignorant of what's needed for financial and tax computations.

Use fe_dec_setround from ISO/IEC TR 24732, "Extension for the programming language C to support decimal floating-point arithmetic".

The modes listed at https://www.ibm.com/docs/en/zos/2.5.0?topic=functions-fe-dec... are:

  FE_DEC_DOWNWARD
    rounds towards minus infinity
  FE_DEC_TONEAREST
    rounds to nearest
  FE_DEC_TOWARDZERO
    rounds toward zero
  FE_DEC_UPWARD
    rounds toward plus infinity
  FE_DEC_TONEARESTFROMZERO
    rounds to nearest, ties away from zero
  _FE_DEC_AWAYFROMZERO
    rounds away from zero
  _FE_DEC_TONEARESTTOWARDZERO
    rounds to nearest, ties toward zero
  _FE_DEC_PREPAREFORSHORTER
    rounds to prepare for shorter precision


The authors of the spec made some provisions, and very likely all of them are useful and correct, the issue is how the programmers will use them, and in some cases there isn't even a "correct" solution that everyone uses.

A classic example in invoicing is an item that is advertised for 60.00 (to the final user) VAT 10% included.

If you try making an invoice for that sum in a few programs you will find three or four ways it is implemented.

Some will have 54.54+5.45=59.99, some will have 54.54+5.46=60.00, some will have 54.55+5.46=60.01 (and possibly a "discount" of 0.01), some will have 54.545+5.45=60.00, some will have 54.54545454+5.45=60.00.
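
A few of those variants reproduced in Python's decimal module (the rounding-mode choices are mine, just to show how the same 60.00 decomposes differently):

  from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

  CENT = Decimal("0.01")
  gross, rate = Decimal("60.00"), Decimal("0.10")
  net_exact = gross / (1 + rate)                     # 54.5454...

  net_dn = net_exact.quantize(CENT, ROUND_DOWN)      # 54.54
  net_up = net_exact.quantize(CENT, ROUND_HALF_UP)   # 54.55

  print(net_dn, (net_dn * rate).quantize(CENT, ROUND_HALF_UP))   # 54.54 + 5.45 = 59.99
  print(net_dn, gross - net_dn)                                  # 54.54 + 5.46 = 60.00
  print(net_up, (net_up * rate).quantize(CENT, ROUND_HALF_UP))   # 54.55 + 5.46 = 60.01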


Yes, it isn't possible for the software to know your local accounting laws and practices.

My point is it's probably better to use existing, well-tested provisions than to build your own from scaled integers, to get one of those three results.

As a bonus, you might get hardware support in the future.


Yes, I understand what you are saying, I was highlighting that those (if adopted) would only fix (maybe) part of the problem, they are just (better) tools.

At the end of the day what I want (and I presume any other customer wants) is a correct invoice with the correct net, tax and total, and this will only happen when (if) the programmer understands the base issues and uses the correct library/algorithm/whatever.


This list seems to be missing something since afaik, IEEE floating point also specifies round-to-even (bankers' rounding) for round to nearest ties. Unless 'FE_DEC_TONEAREST' is that, the documentation does not say.

https://en.wikipedia.org/wiki/IEEE_754#Roundings_to_nearest

EDIT: apparently IEEE does not specify a "round-to-odd" for ties despite this having been used for banking in the UK :/

https://en.wikipedia.org/wiki/Rounding#Rounding_half_to_odd


For goods items, Danish customs require specifying 3 decimals for weights under 1kg, otherwise no decimals. Off the top of my head I don't recall exactly how they expect rounding to be done, I'd guess towards infinity.

Many duties are calculated based on net weight, and often the net weight per goods line is the result of a calculation, for example you're importing N items with a per-item weight of X. If you have a large number of goods items above 1kg but less than 10kg that has weight-based duties, the rounding mode can matter a lot.

None of the rounding modes mentioned captures this below/above 1kg split, so you have to do this in code anyway. Might as well do the rounding there too, to be sure some injected code doesn't mess up the expected rounding mode or similar[1].

[1]: https://irrlicht.sourceforge.io/forum/viewtopic.php?t=8773


Sure, you'll need to handle special cases yourself. But perhaps you don't have to handle all the cases yourself?

As I understand it, one of the new things in IEEE 754 is the idea of a "context", which stores this information. This can be global, but does not need to be. With Python's decimal module it is a thread-local variable.

If you are concerned about, say, mixing thread-local and async, you can also use context methods directly, like:

  >>> import decimal
  >>> x = decimal.Decimal("54.1234")
  >>> y = decimal.Decimal("987.340")
  >>> x+y
  Decimal('1041.4634')
  >>> decimal.getcontext()
  Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999,
  capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero,
  Overflow])
  >>>
  >>> c1 = decimal.Context(prec=4)
  >>> c1.add(x, y)
  Decimal('1041')
  >>> c2 = decimal.Context(prec=5)
  >>> c2.add(x, y)
  Decimal('1041.5')
  >>> c3 = decimal.Context(prec=5, rounding=decimal.ROUND_DOWN)
  >>> c3.add(x, y)
  Decimal('1041.4')
I don't know what the C or C++ API proposals are.


What do you mean by hypothetical solution that does not exist? This is actually specified.

IEEE does specify multiple rounding modes. Does it make more sense to use an existing spec, or roll your own numeric library with rounding modes?


I'm going through this pain right now in helping my kid with homework in rounding and discovering my internalized rules are different from what they're being taught now. EXACT same pain as yours, depending on the precision and rounding rules of your pain scale.


Four and below rounds down to zero, five and above rounds up to 10.

What has changed? Or how different were your internalised rules?


Round to the even number, which eliminates a bias toward the higher number. This is the way Python does it.


Well, you did turn your hypothetical ignorance into a practical one; I've got a calculator using it... an actual physical object.


Must be a SwissMicros!


> I have never in my life joined a software project for any organisation that was able to do basic arithmetic on money correct

This observation should tell you that it's actually quite viable to be off, as long as the errors are small enough.


Quite right. Either this OP spends every day writing letters to banks, shops and credit card companies complaining about the fractional cents that they have been cheated out of, probably in green ink. OR they should be able to recognize that plenty of people have good-enough solutions for this.


As someone who works in fintech, my observation is not that the caveat is "as long as the errors are small enough" but rather "as long as whoever governs the business logic is aware of the impact".

The size of the errors almost always is a large factor in their decision, but ultimately the software we write exists to serve the needs of the business, and if the business decides that larger errors are okay for some reason, then so be it.


The world runs on "good enough."


True, but the world also runs on standards (whether explicitly defined or customary), and doing things differently from everybody else makes it painful to work together.

Sometimes there's also value in doing something objectively poorly, but in a predictable and well-understood way.

Unilaterally starting to "do numbers better" sounds like a recipe for, let's say, interesting times in the finance/accounting world.


It's viable because customers generally have no recourse and companies don't care as long as the problem is in their favor.


Yeah even in university in my intro-level accounting classes they said in the real world nobody cares about discrepancies less than a dollar (and that amount scales with the size of the business). I don't know if that's actually true and I wasn't an accounting major so I don't know what they said in the more advanced classes.

But if I imagine myself as a business owner I would be annoyed with my accounting firm if they spent billable hours chasing down a discrepancy of a few pennies.


Years of practice have led me to this practical wisdom:

As long as things are consistent, no one cares if you are correct. If you lose a penny in the backend calculation, and the frontend shows the amount without the penny, and the email contains the amount without the penny, and the PDF download contains the amount without the penny, no one will care that there should be a penny there.

It becomes problematic if some places are wrong and some are right, and they are not consistent. You won't get credit for being right in only some places.

As long as the error is small enough to be inconsequential, being consistent is more important than being correct.


What are the specific challenges of writing financial software? What are common mistakes you see? What are common data structures for representing money (both the common incorrect implementations and the correct ones)?

Also, provided you have a data structure that can represent money, you should presumably be able to serialize that data structure and store or send it just like any other data, right? Why do you need special database or wire format support? The trivial example is to marshal it to JSON and put it on disk or write it to the network using the protocol suite of your choice, right?


Strings (JSON) and decimal/numeric (DB) are enough to passively store amounts. Calculations and rounding are going to be funny, though. E.g. split a $10 bill into 3 exactly equal parts, store them, and sum them back up to $10 again.


Are there actually systematic approaches to handling these cases? Or are there libraries making this easier? We have so many places where we keep track of the offset and spread it over the items afterwards to mitigate this. It's annoying, as it's most often a two-step process, and it becomes even more complex when you have constraints on the numbers, like applying discounts.


In this particular case, it's easier to think of the problem as "allocating $10 among 3 parties" rather than "dividing $10 among 3 parties." The latter insinuates equal distribution. Often, having the 3 parts sum back up to $10 is more important than giving an extra penny to one person and not dividing equally.

What I've used before is "Express the value in pennies. Divide by X (number of parties). Take the whole number part and give it to each person evenly. Take the modulus and distribute it one by one to each remaining party, until there is nothing left."

Example: allocate $10.01 among 3 parties:

1. You have 1001 pennies

2. Divide by 3 - give each person 333 pennies

3. Take the modulo 1001 % 3 - you have 2 pennies remaining. Time to distribute them round-robin!

4. Give one penny to person A. You have 1 penny remaining.

5. Give one penny to person B. You have no pennies remaining.

If you do this many times, randomly establish the order of the parties each time you round-robin them. In that case, no one will be systematically under-allocated, just because their name starts with the letter Z.

This is not a universal truth, and each situation is different. It may not apply to other kinds of money dividing situations.
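
Here's a minimal sketch of that approach in Python (the function name and the random shuffle for fairness are just illustrative):

  import random

  def allocate_cents(total_cents, parties):
      """Split an integer number of cents so the shares always sum back to the total."""
      base, remainder = divmod(total_cents, len(parties))
      shares = {p: base for p in parties}
      # Hand out the leftover pennies round-robin, in random order so that
      # nobody is systematically under-allocated.
      order = list(parties)
      random.shuffle(order)
      for p in order[:remainder]:
          shares[p] += 1
      return shares

  # allocate_cents(1001, ["A", "B", "C"]) -> e.g. {'A': 334, 'B': 333, 'C': 334}
  # The shares sum to 1001 in every case.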


I never said you need a special database or wire format. If you are using JSON, maybe transfer monetary values as strings rather than as decimal or float data types. Some databases can actually handle money; some can't. Those that can't usually require you to store money as strings to avoid losing information. When your DBA warns you that the database can't do arithmetic on strings, tell them "thank you, it couldn't do correct arithmetic anyway".


Just use integers, and fields ending with "_cents". There is no ambiguity, rounding problems are not even an issue, serialization and deserialization will pose no problem when dealing with external actors, and you can still sum money easily in your db requests.

Problems will arise when you do multiplicative operations on money, for instance when working out taxes. There are precise rounding rules to apply, and the solution is to tackle these issues one abstraction layer above, on the operations rather than the values, because sometimes you'll want to round after each tax operation, sometimes only at the end, on one line or on a whole batch of transactions.
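
For a concrete example of why that layer matters (purely illustrative, not any particular jurisdiction's rule): rounding VAT per line versus on the invoice total can give different results for the same items.

  from decimal import Decimal, ROUND_HALF_UP

  CENT = Decimal("0.01")
  items = [Decimal("0.99")] * 7           # seven line items at 0.99 each
  rate = Decimal("0.19")                  # 19% VAT, just for illustration

  # Variant A: round the tax on each line, then sum the rounded amounts.
  per_line = sum((p * rate).quantize(CENT, rounding=ROUND_HALF_UP) for p in items)

  # Variant B: sum the net amounts first, round the tax once at the end.
  on_total = (sum(items) * rate).quantize(CENT, rounding=ROUND_HALF_UP)

  print(per_line, on_total)               # 1.33 vs 1.32 for the same invoice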

The other problem you'll bump into is the asynchronous nature of money flows. You won't notice it with credit cards (well, you'll figure it out soon enough when you stumble upon "race conditions"), but it becomes explicit when dealing with mandates, direct bank transfers or checks. You need to move the money out of sight of the user, into a ledger specific to that person which holds transactions being processed (this can take days or weeks in some cases), and move it back to the original ledger if the transaction fails. Otherwise you'll run into double-spending issues. This is sometimes out of your control (I had the unfortunate experience of issuing withdrawals multiple times against a big bank's payment processor API, and these dunces sent the money multiple times). Use idempotency keys profusely, as well as (distributed) locks.

The hardest part is not getting your own part right, it's handling external actors' bad implementations. Also, fuck HTTP with webhooks. Some actors don't even make sure you received the webhook. HTTP's human-readable, non-binary nature is a bullshit argument in its favour, and I'd gladly trade it for a binary protocol with strong quality of service like MQTT, since I'll have to implement some kind of smart broker when issuing orders to a shitty HTTP API anyway.


You said something to the effect of “it’s not enough to have financial libraries, you also need storage (e.g., db) and transfer (e.g., wire formats)” which suggests that you can’t just use a standard library to dump JSON to a file or through an HTTP connection.

But yes, I can see how databases not having a money type with corresponding routines is going to make life harder.


If your database has a MONEY or CURRENCY data type, use that.

Just as you should use proper DATE or DATETIME data types for time and not roll your own with strings, integers, seconds-from-epoch, or any other scheme, at least if you want to keep your sanity.


> If your database has a MONEY or CURRENCY data type, use that.

But hopefully only after understanding how it treats these, and whether that's compatible with your requirements.

> Just as you should use proper DATE or DATETIME data types for time and not roll your own

I'm always happy to use the database's date/time format – if it's actually implemented in a sane way and is compatible with my data.

For example, I work with data provided by external partners that specifies dates:

Sometimes they're only specifying the year as a single-digit integer (and you have to guess which one they mean, based on the current date and the hope that the files they send you are not older than 5-10 years). Sometimes there is no year at all. Sometimes the timestamps have an implied timezone, sometimes they're UTC, and sometimes they're supposed to be UTC, but really are in some unspecified local timezone.

In these cases, it can indeed be better to store them as strings and defer interpretation until you actually process them.


What would be the problem with TigerBeetle's approach to use a smaller and configurable unit of measure so that you only have to deal with integers?


The article and the comment you're responding to aren't even talking about the same thing, so it's futile to discuss the pros and cons.

The article is talking about serialized representations, i.e. how you store amounts in a database. The comment is talking about how to arrive at amounts as part of arithmetic calculations, e.g. determining interest, percentage fees etc.


Many decades ago I worked on a Java team that was representing some money values as doubles and some as BigDecimal. When I asked why, they said that at the start they didn't think they'd have any issues with doubles. Years later they still had lots of tech debt from manually rounding doubles into BigDecimals.


As much as SAP has problems, this is a solved problem there. In my experience it is sometimes implemented incorrectly, but that gets discovered very quickly, because it's about money. I encountered this when people did some simple calculations on the frontend side and then submitted the results to the ERP system, where the problem was discovered because of the emerging inconsistencies.


For very simple billing, arbitrary precision sounds like overkill, as do rounding rules and order-of-operations concerns.


Oh, sure, because when you are a small company you don't care that people get correct invoices.

I can certainly sympathise with this stance. There are about a billion things you could do better, but you have limited time to do anything, so you have to prioritise. And if one invoice in ten thousand is incorrect by one cent, and only one client in ten thousand who received a wrong invoice will actually notice, then it is hard to argue you should be spending time fixing this one problem.

Just don't say you can do accounting correctly on floats and we will remain friends.


You can make the same arguments against fixed-precision decimal types. My systems represent currencies to 4 decimal places. At that level of precision, rounding and order-of-operations errors could accumulate much faster than with a 64-bit float.

Decimals are still the way to go, you just have to pick a level of precision acceptable for your application.

My management definitely does not want me spending my time chasing errors over fractions of a penny. The only time those errors are discovered is when I compare the output of new code against old code.


Let me guess, the last 10 times you had to move jobs it was because of a difference in opinion with your boss about the importance of correcting one-cent-errors in one invoice out of every ten thousand?


Haha... no. But I may be focused way more on reliability than 99.99% or so of developers.

The way I solve this problem isn't by constantly hopping projects. I try to find projects that actually require extreme reliability so that I can be doing what I want in an environment where there is a business case for it.


For simple accounting I've always used integers and done all operations in cents, only converting on the frontend. What's my downside here? I guess it wouldn't support unit prices of less than a penny.


If you have different currencies, you need to keep track of the number of decimals used, e.g. the yen has 0 decimals, bitcoin has 8, etc. It can even change over time, as the Icelandic ISK did in 2007. If you have different services with different knowledge about this, you're in big trouble. Also, prices can have an arbitrary number of decimals up until you round them to an actual monetary amount. And if you have enough decimals, the integer solution might not have enough bits anymore, so make sure you use bigints (also when parsing JSON in JavaScript).

Example in js: Number(9999999.999999999).toString() // => 9999999.999999998

And make sure you're not rounding using Math.round

Math.round(-1.5) // => -1

or toFixed

(2090.5 * 8.61).toFixed(2) // => 17999.20, should have been 17999.21

8.165.toFixed(2) // => 8.16, should be 8.17

The better solution is to use arbitrary precision decimals, and transport them as strings. Store them as arbitrary precision decimals in the database when possible.
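
A small sketch of the "decimals end to end, strings on the wire" idea in Python (the field names here are made up):

  import json
  from decimal import Decimal, ROUND_HALF_EVEN

  # Incoming payload: the amount travels as a string, never as a JSON number.
  payload = json.loads('{"amount": "17999.205", "currency": "USD"}',
                       parse_float=Decimal)   # guards any numeric fields that do sneak in
  amount = Decimal(payload["amount"])

  # Exact decimal arithmetic, with the rounding rule made explicit.
  rounded = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

  # Outgoing payload: serialize back as a string so nobody re-parses it as a float.
  print(json.dumps({"amount": str(rounded), "currency": payload["currency"]}))
  # {"amount": "17999.20", "currency": "USD"}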


Also many types of operations could give you the wrong result from incorrect rounding. E.g. let's say you're calculating 10% of $1.01 ten times and adding the result together. The correct result is $1.01, but with your method you will get $1.00.
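
For concreteness, here's that drift sketched with Python's decimal module (the half-up rounding is just one choice for illustration):

  from decimal import Decimal, ROUND_HALF_UP

  CENT = Decimal("0.01")
  price = Decimal("1.01")
  rate = Decimal("0.10")

  # Rounding each 10% slice to whole cents before summing:
  per_slice = (price * rate).quantize(CENT, rounding=ROUND_HALF_UP)   # 0.10
  print(per_slice * 10)                                               # 1.00

  # Keeping full precision and rounding only the final sum:
  print((price * rate * 10).quantize(CENT, rounding=ROUND_HALF_UP))   # 1.01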


The correct answer will depend on the specifics of your environment. In some places, tax is calculated per line item. If you go to a dollar store and buy 10 items with 7.3% sales tax, it adds up without those 0.3¢ bits. In other places, the tax is supposed to be calculated on the total for the tax category in the sale. If you wanted to keep it by line item you'd need the extra digits of precision.


Well, yes, which is why you need to be in control of your rounding and not just let the width of data type you chose for the implementation dictate that.


I enjoyed Mark Dominus's blog post [0] about the billing system he co-wrote, Moonpig. It echoes much of what's in the other responses, namely that ignoring infinitesimal rounding errors would have instilled a culture of, at minimum, doubt. Perhaps another way to see this is to look at a visualization [1] of the discontinuous coverage that floating point gives to the numbers we want to represent.

[0] https://blog.plover.com//prog/Moonpig.html#fp-sucks

[1] https://observablehq.com/@rreusser/half-precision-floating-p...


Having an arbitrary precision library configured to use cents is not enough.

Over a full year it needs to keep track of all the rounding it does when it pays you interest and when that rounding reaches a penny it's supposed to pay you that penny.
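
One way to implement that is sketched below, under the simplifying assumption of daily simple interest on a fixed balance (real products compound and have their own rules):

  from decimal import Decimal, ROUND_DOWN

  CENT = Decimal("0.01")

  def pay_interest(balance, daily_rate, days):
      """Pay interest in whole cents each day, carrying the sub-cent remainder forward."""
      carry = Decimal("0")
      paid = Decimal("0")
      for _ in range(days):
          accrued = balance * daily_rate + carry
          payable = accrued.quantize(CENT, rounding=ROUND_DOWN)
          carry = accrued - payable          # the fraction of a cent still owed
          paid += payable
      return paid, carry

  # pay_interest(Decimal("1000.00"), Decimal("0.000045"), 365)
  # pays out the odd pennies as the carried fractions add up, instead of dropping them.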


"For accounting you should only ever use arbitrary precision math library"

I'm a bit surprised by this advice. I thought the common wisdom was to use a decimal type like BigDecimal in Java.


BigDecimal is an arbitrary precision type so using BigDecimal is how you would follow said advice in Java.


BigInteger and BigDecimal are arbitrary precision types in Java.


nobody cares about precision in financial calculations except for nerds. areas like reporting in particular tend to have significant errors, and not just sigma.

this is now a fintech anecdotes thread, but my first ever fintech job was as part of a two-man special privileges team directly under a director of X at one of the SIFIs. we were supposed to cut across multiple departments and through red tape, with the main goal of eliminating significant overhead in certain processes (we brought some multi-day multi-step calculations down to 8 minutes in one instance, 2 minutes in another, which kept us on a vendor list for a very very very long time). i didn't know how things are done, so i used BigDecimal throughout, with a lot of precision and explicit rounding rules. i did back-of-napkin numerical analysis for at least the inner-loop code. we duplicated work done by other departments (!!!), things like instrument pricing, because we couldn't wait for their batch jobs to complete. at the end we were getting anywhere from slightly to wildly different results. it took a lot of conversations with business 1) to establish that our calculations were correct, 2) for them to realize what the sources of errors were and where they were coming from, and 3) for everyone to just kind of go eeeehh, not a big deal.

i wrote a translator for a subset of j/k to java bytecode using java asm. i was pretty proud of that system, because it allowed expressing pricing rules in what i thought was a much more readable way, with dynamic reload without restart, but man, i am very very sorry for the developers who inherited that system.


Serialization as a byte string ought to be good enough if the storage layer doesn't support arbitrary precision natively.


> arbitrary precision math library with ability to specify rounding rules

I got you:

https://godocs.io/math/big#example-RoundingMode


In my experience most financial firms just use binary floats/doubles

Fixed point decimal if you're lucky (or unlucky, since fixed point sucks)

Arbitrary precision decimal floating point essentially never used


> For accounting you should only ever use arbitrary precision math library with ability to specify rounding rules

Do these rounding rules need to vary by jurisdiction?


Absolutely. IIRC most tax codes have specific rounding rules. Also e.g. some countries have done away with pennies or penny equivalents and have rules about how you are supposed to handle that, etc.


Not sure about that. In Germany, especially at SMEs, no one cares about cents. Your tax reports are done in rounded Euros anyway. People, companies, and the tax and finance authorities are well aware of rounding issues and the different ways to round, which is exactly why no one really cares about cents.

Besides that, using BigDecimal with two decimal places is sufficient in the Java world, imho. It depends on your use case. I'm entirely sceptical of people claiming general things; it depends on the requirements, I'd say.


Only final values are rounded to full Euros (down, if I remember correctly).

If a large taxpayer starts rounding down as part of intermediate calculations of their tax liability, I think they'd get some questions.

But yes, rounding does happen a lot – what's important is that everybody uses the same, transparent rules for that, or it becomes impossible to double-check somebody's books, tax declaration, invoice etc.


Well, how much money can you save by efficient rounding? Surely not so much that anyone would bother. There's also a thing called "Kaufmännisches Runden" (commercial rounding, i.e. rounding halves away from zero), which is kind of cheating as well.

> But yes, rounding does happen a lot – what's important is that everybody uses the same, transparent rules for that, or it becomes impossible to double-check somebody's books, tax declaration, invoice etc.

They don't, that's why it doesn't matter that much ;)


> Well, how much money can you save by efficient rounding?

You and me? Probably a few cents.

A bank or a large corporation selling things billed in sub-cent amounts? Single-digit percentages of their gross revenue, i.e. many millions.

Just as a very simple example: I'm your bank/phone provider/..., and I'm charging you a flat fee of 1.9 cents per transaction/call/... You're my customer and make a billion of such transactions per year.

Option 1: 1,000,000,000 * $0.019 = $19,000,000, so you owe me $19 million.

Option 2: There are no fractional cents, so let's just round up each individual billing event. 1,000,000,000 * $0.02 = $20,000,000, so you owe me $20 million. Cool, a free extra million for the company!

This is why these things are precisely regulated when it comes to sales tax/VAT, for example.

> There's a thing called "Kaufmännisches Runden", which is kind of cheating as well.

Always depends on which side you're on. If you're getting a refund, it can work in your favor! Importantly, it's a precisely defined rule so that it's not possible to cheat in the implementation.


Same in the Netherlands. There is an official rule that you don't have to do mathematical rounding to get to whole euros; you can round in whichever direction is more favourable to you. That rule is probably there just to save the tax service some extra work and IT costs.


they'd probably care if you're compounding interest daily on that rounded up euro



