Sure. Makes sense. The question really is about whether or not some of these applications can tolerate errors in the order of 1%. I contend that those applications are few and far between, particularly in this day and age. People don't want to go backwards.

For example, I can do photorealistic rendering of mechanical models from Solidworks. A 1% error in the output of those calculations would not be acceptable; the quality of the images would not be the same. Play with this in Photoshop and see what a 1% error means to an image. I would have zero interest in a graphics card that ran faster and used less power but introduced a 1% error into its calculations. That's an instant deal breaker.

I can see the competitors' marketing: "Buy the other brand if you want 1% errors".

Does a radiologist care about the processing of an MRI happening faster? Sure. Can he/she tolerate a 1% error in the resulting images? Probably not. He/she probably wants more detail and resolution, not less. I wouldn't want him/her evaluating MRIs with a baked-in 1% error. I think it is ridiculous to suggest that this sort of thing is actually useful outside of a very narrow set of fields in which such errors would be OK.

I'm still waiting for someone to rattle off 100 to 1,000 applications where this would actually be acceptable. Without that, it is nothing more than a fun lab experiment with lots of funding to burn but no real-world utility.

Let's check in ten years and see where it's gone.




> Sure. Makes sense. The question really is about whether or not some of these applications can tolerate errors in the order of 1%.

I think the idea here is threefold:

1. The vast majority of computations do not need extremely high precision. A billion times more floating-point operations go into rendering video for games every day than into MRI processing (made-up statistic). This is already why we distinguish single- from double-precision floating point: most operations only need single precision, and the wider format exists for the minority that don't.

2. Applications that need a particular level of precision can be written with that level of precision in mind. If you know how much precision a given operation provides, and you are aware of the numerical properties of your algorithms, you can write faster code that is nevertheless still as accurate as necessary (a sketch below illustrates this). Most of this isn't done by ordinary programmers, but rather by numerical libraries and the like.

3. Many performance-intensive applications already use 16-bit floating point, even though it has very little native support on the most popular CPUs; AAC Main Profile even mandates its use in the specification. The world is not new to lower-precision floating point, and it has gained adoption despite that lack of hardware support (see the half-precision numbers sketched below).
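To make point 2 concrete, here's a minimal sketch of my own (not from the thread; it assumes NumPy, and kahan_sum_f32 and the random data are purely illustrative): compensated (Kahan) summation keeps a single-precision accumulation about as accurate as you need, even though every individual operation is low-precision.

  # Illustrative only: Kahan compensated summation in float32.
  # The compensation term recovers the low-order bits that a naive
  # float32 accumulation throws away at each step.
  import numpy as np

  def kahan_sum_f32(values):
      total = np.float32(0.0)
      comp = np.float32(0.0)       # running compensation for lost low-order bits
      for v in values:
          y = np.float32(v) - comp
          t = total + y
          comp = (t - total) - y   # what was lost when t was rounded
          total = t
      return total

  rng = np.random.default_rng(0)
  data = rng.random(100_000).astype(np.float32)

  naive = np.float32(0.0)
  for v in data:
      naive += v                   # plain float32 accumulation

  exact = data.astype(np.float64).sum()     # higher-precision reference
  print("naive float32 error:", abs(float(naive) - exact))
  print("kahan float32 error:", abs(float(kahan_sum_f32(data)) - exact))

On a typical run the compensated error is a couple of orders of magnitude smaller than the naive one; the accuracy comes from the algorithm knowing its own numerical properties, not from wider hardware.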
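And to put the 1% figure being debated next to what the formats themselves do, here's another small sketch of my own (again assuming NumPy) showing the intrinsic rounding error of half, single and double precision. Half precision's worst-case relative rounding error in its normal range is roughly 5e-4, i.e. about 0.05%, which is part of why the applications in point 3 can live with it.

  # Illustrative only: built-in rounding error of each IEEE 754 format,
  # for comparison with the 1% error discussed in this thread.
  import numpy as np

  for dtype in (np.float16, np.float32, np.float64):
      eps = np.finfo(dtype).eps    # spacing between 1.0 and the next float
      print(f"{np.dtype(dtype).name:8s} eps = {float(eps):.3e}  "
            f"worst-case relative rounding error ~ {float(eps) / 2:.1e}")

  # Round-tripping values through float16 stays around 5e-4 relative error.
  x = np.linspace(1.0, 2.0, 10_000)
  rel_err = np.abs(x.astype(np.float16).astype(np.float64) - x) / x
  print("max float16 round-trip relative error:", rel_err.max())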

Framing this as "let's make ordinary computations less accurate" is a complete straw man and a red herring: nobody actually suggested that.



