
I couldn't quite connect all the dots as to why this is better. One thing I do understand: summing numbers close to zero before numbers of larger magnitude produces a more accurate sum, because floating-point addition isn't truly associative.
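A minimal sketch of that effect (the constants are illustrative, not from the article): a million addends of 1e-8 vanish entirely when added to 1.0f one at a time, but survive if accumulated at their own scale first.

    #include <cstdio>

    int main() {
        const int   n    = 1000000;
        const float tiny = 1e-8f;  // below half an ULP of 1.0f (~5.96e-8)

        // Large value first: each tiny addend rounds away to nothing.
        float large_first = 1.0f;
        for (int i = 0; i < n; ++i) large_first += tiny;

        // Tiny values first: they accumulate at their own scale, and the
        // total (~0.01) survives the final addition to 1.0.
        float tiny_sum = 0.0f;
        for (int i = 0; i < n; ++i) tiny_sum += tiny;
        float small_first = tiny_sum + 1.0f;

        printf("large first: %.8f\n", large_first);  // 1.00000000
        printf("tiny first:  %.8f\n", small_first);  // ~1.01000000
        return 0;
    }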

My best guess is that this reverse-z convention keeps numbers at the same scale more often. I think what matters is being at the same scale rather than being near zero, because the relative precision is the same at every scale (single-precision float: a 24-bit significand stored in 23 bits with an implied leading 1 bit). If the article is trying to say that numbers near zero have more relative precision because of the denormalized representation of FP numbers, it should call that out explicitly. Also, the advantage of similar scales applies to addition/subtraction; there should be no advantage for multiplication/division, AFAIK.
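One way to see the same-relative-precision point: the gap between adjacent floats (one ULP) grows with magnitude, so ulp(x)/x stays roughly constant for normalized values. A quick check (the sample magnitudes are arbitrary, not from the article):

    #include <cstdio>
    #include <cmath>

    int main() {
        const float xs[] = {0.001f, 1.0f, 1000.0f, 1e6f};
        // One ULP = distance from x to the next representable float above it.
        for (float x : xs) {
            float ulp = std::nextafterf(x, 2.0f * x) - x;
            printf("x = %-8g  ulp = %-12g  ulp/x = %g\n", x, ulp, ulp / x);
        }
        return 0;
    }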




This article is very good


This is the key paragraph: "The reason for this bunching up of values near 1.0 is down to the non linear perspective divide..."


That makes sense: many values bunch up close to 1.0, so remapping the range so that region lands at 0.0, where representable floats are much denser, provides additional precision.
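A rough sketch of that bunching, assuming a standard D3D-style [0,1] depth mapping and its reversed form (the near/far plane values here are made up):

    #include <cstdio>

    const float n = 0.1f, f = 1000.0f;  // hypothetical near/far planes

    // Conventional [0,1] depth: z = near -> 0, z = far -> 1.
    float conventional(float z) { return (f / (f - n)) * (1.0f - n / z); }

    // Reversed: z = near -> 1, z = far -> 0.
    float reversed(float z) { return (n / (n - f)) * (1.0f - f / z); }

    int main() {
        const float zs[] = {0.1f, 1.0f, 10.0f, 100.0f, 1000.0f};
        // Everything beyond z = 10 lands in a sliver near 1.0 with the
        // conventional mapping, but near 0.0 when reversed, where float
        // exponent ranges pile up.
        for (float z : zs)
            printf("z = %7.1f  conventional = %.9f  reversed = %.9f\n",
                   z, conventional(z), reversed(z));
        return 0;
    }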



