Hacker News

Did you mean "3.6 Benchmarks: CHAMP versus Clojure’s and Scala’s HAMTs"?

"Speedups Compared to Clojure’s Maps: In every runtime measurement CHAMP is better than Clojure. CHAMP improves by a median 72 % for Lookup, 24 % for Insert, and 32 % for Delete. At iteration and equality checking, CHAMP significantly outperforms Clojure. Iteration (Key) improves by a median 83 %, and Iteration (Entry) by 73 %. Further, CHAMP improves on Equality (Distinct) by a median 96 %, and scores several magnitudes better at Equality (Derived). Speedups Compared to Clojure’s Sets: The speedups of CHAMP for sets are similar to maps across the board, with exception of insertion and deletion where it scores even better."

Interesting indeed!



Nice results, but in the old days, algorithms improved on other algorithms in the big-O sense.


If asymptotic performance were everything, we would all be using Fibonacci heaps.

In reality other things matter.
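The Fibonacci-heap point in practice: they offer O(1) amortized decrease-key, yet most real Dijkstra implementations use a plain binary heap and sidestep decrease-key entirely with lazy deletion, because the binary heap's constant factors win. A small sketch of that standard pattern (the graph and function name are mine, for illustration):

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances using a binary heap (heapq).
    Instead of decrease-key (where a Fibonacci heap is asymptotically
    better), we push duplicate entries and skip stale ones on pop."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry: lazy deletion
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 6)],
    "c": [("d", 3)],
}
# Distances from "a": a=0, b=1, c=3 (via b), d=6 (via c)
```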


A lot more low-hanging fruit in the old days.


Right, which is why nobody ever used qsort. Or, wait. It's been widely used for a long time...


Well, I didn't say "worst case".

If qsort only provided a constant-factor time improvement over bubble sort in the average case, it wouldn't have been so popular.


Actually, quoted big-O complexity conventionally refers to the worst case. https://stackoverflow.com/questions/3230122/big-oh-vs-big-th...


Sure, you can point to algorithms that no experienced developer ever uses. But if you e.g. compare qsort usage with heapsort usage, people use qsort because of its better constant factors, even though its worst-case complexity is worse.
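To make the average-case point above concrete, here is a rough comparison-counting sketch: a textbook quicksort (average O(n log n), worst case O(n^2)) against a naive bubble sort (Theta(n^2) comparisons on any input). The function names and counting mechanism are mine, purely for illustration:

```python
import random

def quicksort(a, counter):
    # Average O(n log n) comparisons; degrades to O(n^2) on adversarial input.
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    lt, eq, gt = [], [], []
    for x in a:
        counter[0] += 1                # count one partition step per element
        if x < pivot:
            lt.append(x)
        elif x > pivot:
            gt.append(x)
        else:
            eq.append(x)
    return quicksort(lt, counter) + eq + quicksort(gt, counter)

def bubble_sort(a, counter):
    # This naive version always performs n*(n-1)/2 comparisons.
    a = list(a)
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            counter[0] += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

random.seed(0)
data = [random.randrange(10**6) for _ in range(1000)]
qc, bc = [0], [0]
assert quicksort(data, qc) == bubble_sort(data, bc) == sorted(data)
```

On this random input the quicksort comparison count is a small fraction of bubble sort's roughly 500,000; a library-quality qsort then shaves the constant factors further with median-of-three pivots and insertion sort on small runs.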



