The Nitty Gritty of In-Memory Computing (theplatform.net)
48 points by nkurz on Sept 7, 2015 | hide | past | favorite | 1 comment



Compressed representation isn't a trade-off but very often a win-win. With in-memory data structures like graphs or automata, searches can run many times faster if you go for packed representations. What first comes to mind are language-optimised collations http://ow.ly/RVxwa, lossless codes https://en.wikipedia.org/wiki/Entropy_encoding, https://en.wikipedia.org/wiki/Delta_encoding, and lightweight LZ https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Wel.... One still has to decompress data chunks (pages, vertices, and the like) to act locally while traversing, but cache-line utilisation improves because each line loaded carries more data to process, resulting in better speeds. And you can go faster still if you can traverse the compressed representation directly, without unpacking, which is sometimes possible. The general recipe: shrink data to go faster, and always verify the result in real scenarios.
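A minimal sketch of the kind of packing the comment alludes to, assuming sorted adjacency lists in a graph: delta-encode the neighbour ids, then varint-pack the gaps. The generator at the end traverses the compressed bytes directly, decoding gaps on the fly, so no unpacked array is ever materialised. All function names here are my own, not from the article.

```python
def varint_encode(n: int, out: bytearray) -> None:
    """Append n as a variable-length integer (7 data bits per byte,
    high bit set on all but the final byte)."""
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)         # last byte of this value
            return

def pack_neighbors(sorted_neighbors: list[int]) -> bytes:
    """Delta encoding: store gaps between successive neighbour ids.
    Small gaps compress to one or two bytes each."""
    out = bytearray()
    prev = 0
    for v in sorted_neighbors:
        varint_encode(v - prev, out)
        prev = v
    return bytes(out)

def iter_neighbors(packed: bytes):
    """Traverse the compressed list directly: decode each gap as it is
    read and yield the running sum, without unpacking the whole list."""
    value, shift, prev = 0, 0, 0
    for byte in packed:
        value |= (byte & 0x7F) << shift
        if byte & 0x80:
            shift += 7           # continuation byte
        else:
            prev += value        # gap complete; undo the delta
            yield prev
            value, shift = 0, 0
```

For example, the list `[3, 17, 18, 200, 1000]` packs into 7 bytes instead of the 20 a plain 32-bit array would use, and a search can iterate the packed form directly.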



