
Did you even read the article?

> In Mono, decades ago, we made the mistake of performing all 32-bit float computations as 64-bit floats while still storing the data in 32-bit locations. (...) Applications did pay a heavier price for the extra computation time, but [in the 2003 era] Mono was mostly used for Linux desktop applications, serving HTTP pages and some server processes, so floating point performance was never an issue we faced day to day. (...) Nowadays, Games, 3D applications, image processing, VR, AR and machine learning have made floating point operations a more common data type in modern applications. When it rains, it pours, and this is no exception. Floats are no longer your friendly data type that you sprinkle in a few places in your code, here and there. They come in an avalanche and there is no place to hide. There are so many of them, and they won’t stop coming at you.

The raytracer is just a good performance test.
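
For anyone skimming, here is a minimal C sketch of the two strategies the quote contrasts (hypothetical helper names, not Mono's actual code): doing the math in 32-bit, versus computing every intermediate in 64-bit and narrowing the result back into a 32-bit location.

    #include <stdio.h>

    /* All-32-bit: each operation rounds to float. */
    static float muladd32(float a, float b, float c)
    {
        return a * b + c;
    }

    /* Compute wide, store narrow: intermediates are 64-bit doubles and only
       the final result is squeezed back into a 32-bit slot -- roughly the
       strategy the article says old Mono applied to all float code. */
    static float muladd64(float a, float b, float c)
    {
        double t = (double)a * (double)b + (double)c;
        return (float)t;
    }

    int main(void)
    {
        /* The two can differ in the last bit, and the wide path is the one
           that gets expensive once floats dominate the workload. */
        printf("%.9g %.9g\n", muladd32(0.1f, 0.3f, 1.0f), muladd64(0.1f, 0.3f, 1.0f));
        return 0;
    }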



> The raytracer is just a good performance test.

The article does say that "it was a real application", which is a bit of a stretch.


Early C compilers made that same mistake.


It's the x86 hardware which made the original mistake. Using 80-bit floats in the x87 FPU turned out to be a bad idea. Thankfully, standardizing SSE and SSE2 in x86-64 gave us a way out of that mess.
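
To make the x87 problem concrete, here is a small C sketch (behavior depends on compiler, flags and optimization level, so treat it as an illustration rather than a guaranteed result):

    #include <stdio.h>

    int main(void)
    {
        double a = 1.0e16;  /* the spacing between doubles here is 2.0 */
        double b = 2.9;

        /* Evaluated strictly in 64-bit (SSE2, the x86-64 default), a + b
           rounds to 1e16 + 2, so this prints 2.  Evaluated in the x87's
           80-bit registers (e.g. 32-bit GCC with -mfpmath=387), the 2.9
           can survive the addition and this prints roughly 2.9 instead. */
        printf("%.17g\n", a + b - a);
        return 0;
    }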


Yes I did, and FWIW, comments asking people whether they read the article or not are off topic on HN. A performance test is exactly the same thing as a benchmark. In any real-world code, slower floats don't matter at all. None of you who have commented have been able to, or even tried to, prove me wrong on that point.


> In any real-world code, slower floats don't matter at all. None of you who have commented have been able to, or even tried to, prove me wrong on that point.

First of all, this is a burden-of-proof fallacy: the onus is on you to prove this statement right, not on us to prove you wrong.

Second of all, nobody has been trying to prove you wrong because you did not actually say that floating-point performance does not matter in real-world code. You may have had it in mind, but you cannot blame others for not picking up on something you did not communicate in the first place.

What you did say was "correctness > speed", which is not the same thing. Furthermore, while this statement is true, it needs a context to be applied to, which you have to give. Without further justification from you as to why using float32 operations for float32 data types would reduce correctness, it is a hollow truism.


> In any real-world code, slower floats don't matter at all

This is the same reasoning that tanked the Cyrix 6x86.

If it were true, why don't we ditch hardware floating point altogether and emulate it with integer arithmetic instead? I'm sure chip manufacturers would appreciate having the die space back.


That's a straw man. I explained that I meant that a slower built-in floating-point TYPE doesn't matter: "If you really care about CPU raytracing performance, you need to write handcrafted SIMD code, and C#'s default float handling is of no consequence to you."
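
As a rough illustration of what "handcrafted SIMD code" means here, a minimal C sketch using SSE intrinsics (not anyone's actual raytracer; C# exposes the same instructions through System.Runtime.Intrinsics):

    #include <xmmintrin.h>  /* SSE intrinsics */

    /* One step of a hand-written kernel: multiply-add four floats at a time.
       The intrinsics always operate on packed 32-bit lanes, so the language's
       scalar float evaluation rules never enter the picture. */
    static __m128 muladd4(__m128 a, __m128 b, __m128 c)
    {
        return _mm_add_ps(_mm_mul_ps(a, b), c);
    }

    int main(void)
    {
        __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);  /* lanes 1, 2, 3, 4 */
        __m128 b = _mm_set1_ps(0.5f);
        __m128 c = _mm_set1_ps(1.0f);
        float out[4];
        _mm_storeu_ps(out, muladd4(a, b, c));  /* out = {1.5, 2.0, 2.5, 3.0} */
        return 0;
    }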



