Hacker News

Really bizarre. It seems like it wouldn't have been much more work to just implement it properly. Instead people are supposed to wait until the compiler magically gets smart enough to optimize the pattern... but the pattern and method are both intentionally slow, so there will never be usage pressure to optimize it.

A reasonable compromise would be to implement a single-pass three-way compare in native Go instead of optimized assembler, and then, if users keep requesting that it be optimized, make the compiler improvements or write the hand-tuned assembler version at that point.

Otherwise what you're going to get is people using messy workarounds to do a three-way compare, like a byte-wise compare that isn't Unicode-correct. Blech.
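For concreteness, a single-pass three-way compare of the kind this comment suggests might look like the sketch below. The function name is hypothetical, and this compares bytes in Go's native string order (it is not collation-aware):

```go
package main

import "fmt"

// threeWayCompare is a hypothetical single-pass implementation: it walks
// both strings once, byte by byte, and returns -1, 0, or +1. Byte order
// matches Go's built-in string comparison, not Unicode collation.
func threeWayCompare(a, b string) int {
	n := len(a)
	if len(b) < n {
		n = len(b)
	}
	for i := 0; i < n; i++ {
		switch {
		case a[i] < b[i]:
			return -1
		case a[i] > b[i]:
			return +1
		}
	}
	// Shared prefix is equal; the shorter string sorts first.
	switch {
	case len(a) < len(b):
		return -1
	case len(a) > len(b):
		return +1
	}
	return 0
}

func main() {
	fmt.Println(threeWayCompare("abc", "abd")) // -1
	fmt.Println(threeWayCompare("abc", "abc")) // 0
	fmt.Println(threeWayCompare("abd", "abc")) // 1
}
```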



It was previously implemented as a special internal function https://github.com/golang/go/commit/fd4dc91a96518fdbb47781f9...

So making it a simple Go function was actually more work (at least as an individual change), because they could have just left it alone.

A three-way compare in native Go would likely be slower in most cases than the "slow" version that exists there: in the actually-equal and differing-size cases, the "slow" one dispatches directly to optimized, platform-tuned assembly, and the remaining cases still end up in tuned multibyte comparison code that the stock Go compiler likely couldn't match without clever bounds-check elimination. A compiler that can make that three-way compare fast is desirable, and there are reasons to pursue it independent of this function, but even Rust uses unsafe and farms out to built-in tuned memcmp routines for three-way string comparison.
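To illustrate why the "slow" version still hits fast paths: something like the sketch below (reconstructed from the description above, not copied from the stdlib) relies on the string == and < operators, which compile down to runtime comparison routines backed by tuned assembly, at the cost of potentially scanning the data twice:

```go
package main

import "fmt"

// compareTwoPass mirrors the two-operator shape described in the thread:
// == and < each dispatch to the runtime's tuned string comparison, but
// in the worst case (unequal strings of equal length) the data is
// walked twice.
func compareTwoPass(a, b string) int {
	if a == b {
		return 0
	}
	if a < b {
		return -1
	}
	return +1
}

func main() {
	fmt.Println(compareTwoPass("go", "gopher")) // -1: "go" is a proper prefix
	fmt.Println(compareTwoPass("go", "go"))     // 0
	fmt.Println(compareTwoPass("z", "a"))       // 1
}
```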

My theory is that strings.Compare _was_ known to be faster, people started preferring it because it was faster, and that in part prompted the change. Most engineers will use a faster approach if one is available, even if it is a bit clunkier and unnecessary (as this comment section shows, many folks are outraged at the idea of code not optimized for maximum performance). Encouraging bad usage because a function is unintentionally faster than the naive approach is a bug in a stdlib.


I added the original fast implementation in this CL https://go.dev/cl/2828 because I found it useful, clear, and efficient in tuple "less" comparisons like this:

   if cmp := strings.Compare(x.first, y.first); cmp != 0 {
       return cmp < 0
   }
   if cmp := strings.Compare(x.second, y.second); cmp != 0 {
       return cmp < 0
   }
   ...
Compared to the != then < approach, it makes only a single pass over the string data. To this day I have never understood the justification for intentionally making it slower, or why the style of code above isn't reasonable.
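The "!= then <" alternative mentioned above can be sketched side by side with the single-pass style. The pair type and function names here are hypothetical, for illustration only:

```go
package main

import (
	"fmt"
	"strings"
)

// pair is a hypothetical two-field tuple used for the comparison.
type pair struct{ first, second string }

// lessTwoPass is the "!= then <" style: when the first fields differ,
// their bytes may be walked twice (once by !=, once by <).
func lessTwoPass(x, y pair) bool {
	if x.first != y.first {
		return x.first < y.first
	}
	return x.second < y.second
}

// lessOnePass uses strings.Compare, so each field is scanned at most once.
func lessOnePass(x, y pair) bool {
	if cmp := strings.Compare(x.first, y.first); cmp != 0 {
		return cmp < 0
	}
	return x.second < y.second
}

func main() {
	a := pair{"alpha", "one"}
	b := pair{"alpha", "two"}
	fmt.Println(lessTwoPass(a, b), lessOnePass(a, b)) // true true
}
```

Both versions order tuples identically; the difference is only in how many times the underlying bytes are traversed.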


Any sort that doesn't take Unicode collation into account isn't Unicode-correct.



