Hacker News

The other issue will be the I/O bottleneck. A practical application would need on-chip memory to buffer data (not mentioned here, I think). Even so, applications will be limited to workloads that do enough processing per byte of data moved on and off the chip.

I spend most of my time working on FPGAs, and I imagine that a practical implementation of this chip would end up involving a good deal of similar work. Data would have to be piped from one core to adjacent cores to keep the off-chip bandwidth tractable. This could leave portions at the edges unused because the data can't get there.

In summary, the applications for such a chip are limited, and will require a different skillset from what we usually ascribe to a programmer, but within those constraints these chips could be incredibly high-performance.

EDIT: I haven't played with log-scale arithmetic yet, but I suspect adds/subs will be much more computationally complex--probably more so than multiplies are with the current linear-scale representation. Just a thought.




I don't think addition and subtraction are too bad in this model. The slides mention that there is a simple circuit computing F(t):=log(1+2^t). So if you have two numbers x and y, represented by their logarithms log(x), log(y), the log of their sum can be computed using F as follows: log(x+y) = log(x) + log(1 + y/x) = log(x) + F(log(y) - log(x)).

Whether this is better or worse than multiplies in the current linear-scale representation depends on how hard it is to compute F, I guess.
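The identity above is easy to check numerically. Here's a minimal Python sketch (the function names `F` and `log_add` are mine, not from the slides), using base-2 logs so that F(t) = log2(1 + 2^t) matches the circuit described:

```python
import math

def F(t):
    # The slides' helper function: F(t) = log2(1 + 2^t).
    return math.log2(1.0 + 2.0 ** t)

def log_add(lx, ly):
    # Given lx = log2(x) and ly = log2(y), return log2(x + y)
    # via log(x+y) = log(x) + F(log(y) - log(x)).
    # Swapping so the larger operand is the base keeps F's argument <= 0,
    # which bounds the range a hardware table for F would need to cover.
    if lx < ly:
        lx, ly = ly, lx
    return lx + F(ly - lx)

# Sanity check: log2(6) "plus" log2(10) should come out as log2(16).
lsum = log_add(math.log2(6.0), math.log2(10.0))
assert abs(2.0 ** lsum - 16.0) < 1e-9
```

So an add costs one real subtraction, one evaluation of F, and one real addition; the whole question, as noted, is how cheap the F circuit can be made.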




