That's not “worse”, that's the entire point of using O() notation! The beauty of O() notation is that it lets us carry out, fully rigorously, computations like

(n + O(√n))(n + O(log n))^2 = (n + O(√n))(n^2 + O(n)) = n^3 + O(n^(5/2))

without dealing with a mess of sets and quantifiers. Please take a look at some works where asymptotic expressions are dealt with proficiently; you'll understand. (de Bruijn's book https://news.ycombinator.com/item?id=16834297 for example, or at least Chapter 9 of Concrete Mathematics.) I recently worked out an example here: https://cs.stackexchange.com/a/88562/891 — replacing all O() equations with “∈” and “⊆”, though it can be done ( https://math.stackexchange.com/a/86096/205 ), is just cumbersome and only distracts from what's going on.
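For the record, what a chain like that asserts (under the usual one-way reading, as in de Bruijn or in Concrete Mathematics) is roughly this:

    (n + O(√n))(n + O(log n))^2 = n^3 + O(n^(5/2))

    unpacks to: for every f with |f(n)| ≤ A·√n and every g with |g(n)| ≤ B·log n
    (some constants A, B, all large n), there is a constant C with

    |(n + f(n))(n + g(n))^2 − n^3| ≤ C·n^(5/2)   for all large n.

But the point is precisely that one never has to write, or think, in that form.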



I think you lost a factor log n there. Pretty sure (n + O(log n))^2 is (n^2 + O(n log n)).
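Expanding it out term by term:

    (n + O(log n))^2 = n^2 + 2n·O(log n) + O((log n)^2)
                     = n^2 + O(n log n) + O((log n)^2)
                     = n^2 + O(n log n)

since (log n)^2 is itself O(n log n).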

In any case your notation works perfectly fine as an equality of sets. It's not that unusual to add and multiply sets together, there's really only one sensible definition, and the slight abuse of notation to use x instead of {x} is perfectly acceptable if you're careful.
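Concretely, the operations are elementwise (Minkowski-style), with that singleton abuse for lone terms:

    A + B = { a + b : a ∈ A, b ∈ B }
    A · B = { a · b : a ∈ A, b ∈ B }
    x + A = {x} + A,   x · A = {x} · A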

The problem is that f = h + O(g) is generally used in a way that's false as an equality of sets, it's usually used to mean f ∈ h + O(g), which just makes using '=' needlessly sloppy notation (especially since you can make it correct by just changing a single character, no mess or quantifiers required).
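A concrete instance:

    2n^2 + 3n = 2n^2 + O(n)    (false as an equality of sets: the left side is a single function)
    2n^2 + 3n ∈ 2n^2 + O(n)    (true, and what is actually meant)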

Still, that's nothing compared to the suggestions on Stack Overflow to use = to mean ⊆. Which is a terrible idea, equivalent to using = instead of ≤. In fact you point out one of the reasons it's terrible yourself, when you note that such equality signs only work one way, going against all mathematical convention.

Even stronger, ⊆ has some very nice properties. It's a total order on the big O sets, which is one of the nicer properties you can have (it's merely a partial order on the f + O(g) sets, but you can't have everything). Why on earth would you sacrifice all that just so you can avoid a scary math symbol?
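(One fact that makes the inclusions pleasant to work with: O(f) ⊆ O(g) holds exactly when f ∈ O(g). Sketch:

    f ∈ O(g) ⟹ O(f) ⊆ O(g):   any h ∈ O(f) has |h| ≤ C|f| ≤ C·C'|g| eventually, so h ∈ O(g)
    O(f) ⊆ O(g) ⟹ f ∈ O(g):   f ∈ O(f) ⊆ O(g)

so statements about individual functions translate directly into inclusions between their O-sets.)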


(Thanks for the correction, and the polite response.)

The purpose of notation is to communicate effectively, and ideally notation should match the thoughts that the writer has and wishes the reader to have. That is, notation should match thought, rather than humans changing their thinking to match notation.

When a mathematician (who works often with asymptotics, say in analytic number theory) writes something like O(n), they are not thinking of a set; they are thinking of “some unspecified quantity that is at most a constant times n”. (As de Bruijn illustrates with his L() example: https://shreevatsa.wordpress.com/2014/03/13/big-o-notation-a...) So for example, the sentence

(n + O(log n))^2 = n^2 + O(n log n)

is thought of (by the person writing it) as something like

“when you take n and add a quantity that is at most a constant times log n, and square it, you get n^2 plus at most a constant times n log n”

and not as something like

“the set of functions obtainable by adding the function n ↦ n to a member of the set of functions that map n to at most a constant times log n, and squaring the sum, is a subset of the set of functions obtainable by adding the function n ↦ n^2 to a member of the set of functions that map n to at most a constant times n log n”,

even if the latter is the fully formal and precise way of articulating it. (Consider “the sky is blue” versus “the sky is a member of the set of all blue things”, where “the set of all blue things” was never a part of the original speaker's thoughts.)

That's one reason for preferring the equals sign.

I think what's happening is that we lack a good theory or notation for talking about unspecified things the way we think of them, and set theory (and associated notation) is the closest thing anyone's bothered to develop. (An unspecified thing is “just” a member of some set. And when someone wants to be fully formal, that works fine enough and in fact the differences disappear. As Thurston says (https://shreevatsa.wordpress.com/2016/03/26/multiple-ways-of...): “Unless great efforts are made to maintain the tone and flavor of the original human insights, the differences start to evaporate as soon as the mental concepts are translated into precise, formal and explicit definitions.”) Using the equals sign here captures the original thought better than ∈ or ⊆.

What's so terrible about a one-way = sign anyway? The problem can't be that it goes “against all mathematical convention”, because using it that way has been the mathematical convention for over a century, ever since soon after Bachmann introduced the notation in 1894. It seems that as people learn more mathematics, they learn to accept a great many things and extensions to notation, but the thought of an asymmetric = sign, used like “is”, is just too much to bear for some people. But everyone who works with asymptotics enough does get used to it and comes to appreciate it, so it can't be that either...
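Spelled out, the convention is just that the sign is read left to right, as “is”, and never flipped:

    n = O(n^2)        fine: n is at most a constant times n^2
    O(n^2) = n        never written
    O(n) = O(n^2)     fine: anything that is O(n) is also O(n^2)
    O(n^2) = O(n)     false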


I have no problem with using notation in a more 'intuitive' way than the technical definitions allow. However I won't ever acknowledge such notation as correct. If you want to be precise you'll need to use precise notation, and insisting that abusing the '=' sign is precise will do more harm than good. Unless you consider 1+1=3 to be good notation.

If you seriously think you're not missing out by removing the distinction between ⊆ and =, consider that O and subset relations are enough to express all the relations you'd normally use the various other big-O symbols for, the ones nobody ever bothers to remember.

So f being of order o(g) is equivalent to O(f) being a proper subset of O(g) (O(f) ⊊ O(g)), f being of order Theta(g) is simply equivalent to O(f) = O(g), and f being of order Omega(g) is simply O(f) ⊇ O(g). You might need to restrict yourself to nonnegative monotonically increasing functions, but that's pretty much done in practice anyway. And this might differ from some of the definitions in use, but those vary a bit across sources anyway; these definitions are as sensible as any, and they're compatible with the natural partial order on sets of the form O(f) given by inclusion.
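To make the first one concrete:

    n is o(n^2):      n / n^2 = 1/n → 0
    O(n) ⊊ O(n^2):    anything eventually bounded by C·n is eventually bounded by C·n^2,
                      while n^2 itself lies in O(n^2) but not in O(n)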

Seriously though there's nothing you'll gain from throwing away a nice inequality relation just because it saves you from writing ⊆.

I'm also not sure what you have against thinking of things as sets, but you can't be precise in mathematics and insist you're not using sets, at least not without going through a lot more trouble than you'd ever avoid that way. In particular it's fine to think of:

>“when you take n and add a quantity that is at most a constant times log n, and square it, you get n^2 plus at most a constant times n log n”

and

>“the set of functions obtainable by adding the function n ↦ n to a member of the set of functions that map n to at most a constant times log n, and squaring the sum, is a subset of the set of functions obtainable by adding the function n ↦ n^2 to a member of the set of functions that map n to at most a constant times n log n”,

as being the same thing. One is just written in slightly more obnoxious math-speak. However, if you insist that the meanings are different, then something is wrong, because that would imply that the most straightforward mathematical interpretation is apparently different from what you meant.



