
I'm fairly confident that since the lookup table fits comfortably in L1 cache, both algorithms will be about equally fast. You might see a difference if you have to case-fold the entire Library of Congress several times per user operation. The other case where there may be a meaningful performance difference is embedded devices with small caches and slow memory.


Indirect memory lookups are slow even if the data is in the L1 cache: the load latency still sits on the dependency chain, unlike pure register arithmetic.



