> I find using decimal places (like 16.1% and 21.1%) in human experiments pretty irritating. It feels like false precision.

Whether you say 21.1%, 21%, or 20%, you still have a single number. You could make an argument that decimal places like 21.147258% add clutter, but without an actual measure of uncertainty, all you're doing is reporting a summary of the data in the sample with different amounts of arbitrary rounding. That's not particularly helpful as a substitute for the full distribution.
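
To make "an actual measure of uncertainty" concrete, here's a minimal sketch (Python, using a hypothetical sample of 211 "yes" answers out of 1000) that reports the proportion with a Wilson 95% interval instead of leaning on rounding:

    import math

    def wilson_ci(successes, n, z=1.96):
        # Wilson score interval for a binomial proportion (z=1.96 ~ 95%).
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return center - half, center + half

    lo, hi = wilson_ci(211, 1000)  # hypothetical sample
    print(f"21.1% (95% CI {lo:.1%}-{hi:.1%})")  # 21.1% (95% CI 18.7%-23.7%)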

It's not just a number; it's a string of characters representing a measured value, and the way it's written conveys the number of significant figures (https://en.wikipedia.org/wiki/Significant_figures).

Significant figures are not a universal standard; they are an imperfect way to convey confidence in-band, at the cost of precision.
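
As an illustration of that trade-off, a small sketch (Python; round_sig is a hypothetical helper, not a stdlib function) of rounding to a chosen number of significant figures:

    import math

    def round_sig(x, sig):
        # Round x to `sig` significant figures.
        if x == 0:
            return 0.0
        digits = sig - int(math.floor(math.log10(abs(x)))) - 1
        return round(x, digits)

    print(round_sig(0.21147258, 2))  # 0.21  -> reported as "21%"
    print(round_sig(0.21147258, 3))  # 0.211 -> reported as "21.1%"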

21.1% implies a different confidence interval than 21%. At least to me.

Yup, 21% and 21.0% do implicitly convey different information even though they’re the same number.
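
One way to make that reading concrete: treat the last reported digit as accurate to within half a unit in its place (a common sig-fig convention, not a formal guarantee). A quick sketch with Python's decimal module:

    from decimal import Decimal

    def implied_interval(reported):
        # Half-unit-in-the-last-place interval implied by the reported string.
        d = Decimal(reported)
        half = Decimal(1).scaleb(d.as_tuple().exponent) / 2
        return d - half, d + half

    print(implied_interval("21"))    # (Decimal('20.5'), Decimal('21.5'))
    print(implied_interval("21.0"))  # (Decimal('20.95'), Decimal('21.05'))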