Well, the question is what you want to optimize for.
It's a library function that exists for consistency's sake; it's clear and simple, and reasonably fast (and near optimal in some cases).
For optimal performance but suboptimal clarity, they could use a runtime implementation, but ideally the clear code would be fast, so best not to compromise the clarity for performance unless it proves important.
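The function under discussion isn't shown here, but the trade-off can be sketched with a hypothetical example. A "clear and simple" implementation is a plain loop anyone can verify at a glance; the "runtime implementation" alternative is what Go's own standard library sometimes does, e.g. `bytes.Count` delegates the single-byte case to an assembly routine in `internal/bytealg`. The `countByte` function below is purely illustrative, not the actual stdlib code:

```go
package main

import "fmt"

// countByte is the "clear and simple" style: obviously correct,
// reasonably fast, and trivially maintainable. The optimized
// alternative would hand this off to a per-architecture assembly
// routine, gaining speed at the cost of readability.
func countByte(s string, b byte) int {
	n := 0
	for i := 0; i < len(s); i++ {
		if s[i] == b {
			n++
		}
	}
	return n
}

func main() {
	fmt.Println(countByte("hello, world", 'l')) // 3
}
```

The point of the comment above: until profiling shows the simple version matters, the clear loop is the better engineering choice.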
I've been following Go content for years, and this is the first I can recall hearing of it, so I'm mostly fascinated that people care, given that this seems to be doing exactly what the most classic optimization advice suggests: not optimizing prematurely.
The most classic optimization advice is based on a complete misquote. Here is some critical context:
> The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers

> In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal

> when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies.
> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
Keep in mind that in this case the argument is being made against goto, which is effectively an inlined `jmp` instruction that can do all sorts of insane things just to save a few instructions. This quote is discouraging a case where the complexity is extreme and the benefit is minor.
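Since Go itself has `goto`, the contrast Knuth was writing about can be sketched in Go directly. Both functions below compile to essentially the same jumps; the difference is that the structured version states its intent, while the `goto` version forces the reader to reconstruct the control flow. This is a toy illustration, not code from the thread:

```go
package main

import "fmt"

// sumTo uses structured control flow: the loop's intent is immediate.
func sumTo(n int) int {
	total := 0
	for i := 1; i <= n; i++ {
		total += i
	}
	return total
}

// sumToGoto does the same work with explicit jumps, the style
// Knuth's critics were reacting against: the compiler emits much
// the same code, but the human has to trace the labels by hand.
func sumToGoto(n int) int {
	total, i := 0, 1
loop:
	if i > n {
		return total
	}
	total += i
	i++
	goto loop
}

func main() {
	fmt.Println(sumTo(10), sumToGoto(10)) // 55 55
}
```

In a toy like this the cost is small; Knuth's point was about cases where the jump-based version becomes genuinely hard to reason about for a minor saving.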
All that people can seem to remember is "premature optimization is the root of all evil".
“Avoid premature optimization” means to do the cleanest/clearest thing now, and only optimize later when you have the full picture. Arguably, that advice doesn’t apply to standard library functions anyway, at least not to the same extent. But if it did: How is adding a new, “bad” implementation any clearer than simply calling the already existing optimized one that the GP linked to?
You should optimize when it’s net useful, not just because you know how. Making a random function faster and harder to maintain is bad engineering if the speed isn’t useful.
There’s no such thing as premature optimization in a language standard library. Every function should be optimized to the bitter end because everybody uses the standard library.