Oh yes, sorry, I meant to write 3 * O(n), which doesn't change the order but is still three times the operations. The example I was remembering was doing filters 'inside' maps.
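For concreteness, here's a minimal sketch of the kind of thing I mean (hypothetical transformations, not code from any real PR): three chained passes each traverse the list once, versus one fused pass doing the same work.

```python
data = list(range(1000))

# Three separate passes: each traversal is O(n), so roughly "3n" operations,
# though asymptotically it's all still just O(n).
step1 = [x + 1 for x in data]
step2 = [x * 2 for x in step1]
step3 = [x for x in step2 if x % 3 == 0]

# One fused pass: a single traversal produces the same result.
fused = [y for x in data if ((y := (x + 1) * 2) % 3 == 0)]

assert step3 == fused
```

The asymptotic class is identical; the difference is purely in the constant factor (three traversals and two intermediate lists vs. one traversal and none).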
So... O(n)? Leaving aside the fact that "3 * O(n)" is nonsensical and not defined, recall that f(x) is O(g(x)) if there exists some real c such that f(x) is bounded above by c·g(x) for all sufficiently large x. Maybe you can say that g(x) = 3n, in which case any f(x) that is O(3n) is really just O(n): we have some c such that f(x) < c(3n), and so with d = 3c we have f(x) < dn.
It's not the lower-order terms or constant factors we care about, but the relative rate of growth of space or time usage between algorithms: for example, linear vs. logarithmic complexity, where the difference in the highest-order term dominates any lower-order terms or constant differences.
What annoys me greatly is people imprecisely using language, terminology, and other constructs with very clearly defined meanings, without realizing the semantic implications of their sloppily arranged ideas, while still thinking they've done the "smarter" thing by throwing out some big-O notation. Asymptotic analysis and big-O notation are about comparing relative rates of growth at the extremes. If you're talking about operations, CPU, or wall-clock time, use those measures instead; but in that case you would actually need an empirical measurement of emitted instruction count or CPU usage to prove there is indeed a threefold increase of something, since you can't easily reason a priori about compiler output, process scheduling decisions, or current CPU load.
I do understand 3 * O(n) is just O(n), thanks. I was just clarifying my initial typo. However, it's still three or four times the iterations needed, and that matters in performance-critical code. One is terminology; the other is a practical difference in code execution time, which matters more and thus needs to be understood better. You might not 'care about constant factors', but they do actually affect performance :).
> Sorry but this kind of theoretical reasoning wouldn't move a needle if I'm reviewing your PR.
If this were a PR review situation I would ask for a callgrind profile, timings, or some other measurement of performance. You don't know how your code will be optimized down by the compiler, or where the hotspots even are, without taking a measurement. Theoretical arguments, especially ones based on handwavey applications of big-O, aren't sufficient for optimization, which is ultimately an empirical activity: it's hard to gauge the performance of a piece of code by inspection alone, so actual measurements are required.
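In that spirit, a measurement is cheap to take. A rough sketch using the standard library's `timeit` (the specific transformations here are made up for illustration; the point is to time both variants rather than argue about them):

```python
import timeit

def three_passes(data):
    # Three traversals with two intermediate lists.
    a = [x + 1 for x in data]
    b = [x * 2 for x in a]
    return [x for x in b if x % 3 == 0]

def one_pass(data):
    # Single fused traversal, no intermediates.
    return [y for x in data if ((y := (x + 1) * 2) % 3 == 0)]

data = list(range(100_000))
for fn in (three_passes, one_pass):
    t = timeit.timeit(lambda: fn(data), number=20)
    print(f"{fn.__name__}: {t:.3f}s")
```

Whether the fused version actually wins, and by how much, depends on the interpreter, the data, and what the transformations really are, which is exactly why you measure instead of asserting a "threefold" difference from the armchair.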
I recall looking at New Relic reports of slow transactions that suffered from stacked n+1 query problems, because the ORM was obscuring what was actually going on under the hood at a lower level of abstraction (SQL).
My point is it's often difficult to just visually inspect a piece of code and know exactly what is happening. In the above case it was the instrumentation and empirical measurements of performance that flagged a problem, not some a priori theoretical analysis of what one thought was happening.
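For anyone unfamiliar, the n+1 pattern looks roughly like this (hypothetical schema, plain `sqlite3` standing in for whatever the ORM emits): one query fetches the parent rows, and then a separate query fires per parent, which application code hides but the database sees plainly.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'ada'), (2, 'grace');
    INSERT INTO posts VALUES (1, 1, 'p1'), (2, 1, 'p2'), (3, 2, 'p3');
""")

# n+1 pattern: one query for the authors, then one query per author.
authors = conn.execute("SELECT id, name FROM authors").fetchall()
for author_id, name in authors:
    posts = conn.execute(
        "SELECT title FROM posts WHERE author_id = ?", (author_id,)
    ).fetchall()  # this line runs N times, once per author

# Single JOIN: the same data in one round trip.
rows = conn.execute("""
    SELECT a.name, p.title
    FROM authors a JOIN posts p ON p.author_id = a.id
""").fetchall()
```

In an ORM the loop body is usually an innocent-looking attribute access like `author.posts`, which is precisely why instrumentation catches it and code reading often doesn't.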