Alternative notation for exponents, logs and roots? (2011) (math.stackexchange.com)
177 points by aleyan on Dec 2, 2020 | hide | past | favorite | 65 comments


I think it's worth linking to the 'Notation as a tool of thought' thread from the other day, as it gave me quite a lot to read, watch and think about [0].

It also reminds me of Graphical Linear Algebra [1] which I occasionally see mentioned here. And, as included in my comment in that thread, the notion of using Tau as the circle constant in equations [2].

Notation is a weird topic to tackle. Like with new technologies or languages on HN, there seem to be those who get [a new notation when it is proposed] and evangelise it, and those who see it as pointless and vocally dismiss it. Posts like the article where you're weighing up and exploring benefits and limitations of notation seem rare - and even those that do exist seem to be pitching for their new notation to be a global replacement rather than as a pedagogical or epistemological tool.

[0] https://news.ycombinator.com/item?id=25249563

[1] https://graphicallinearalgebra.net/about/

[2] https://tauday.com


While a notation for this case makes for an interesting discussion (which the post is), the "blast radius" of this situation is so small that (to me) it wouldn't be worth adopting any of the proposals (not saying anyone there pushed them).

For a notation to be worth adopting, its impact must be far-reaching. The "graphical linear algebra" makes a good case with a larger blast radius, for example. However, even that has nowhere near the impact of, say, Feynman diagrams.

The x^y notation is repurposed elsewhere - ex: power set of a set S is written 2^S .. with no implication that the "logarithm" of the power set to the base 2 is S. Same for matrices raised to a power, where matrices are usually not thought of as a base for doing logarithms. Same for operators in calculus (ex: laplacian .. now that would be confusing to club with a triangle!)


I'm not sure what you mean by blast radius, but if you were to think about the number of people learning these concepts - as explained in the SE post here - the order would obviously be the reverse of what you suggest (exponents most important).


By "blast radius" I meant the impact of the notation beyond the originating case into other areas. I wasn't referring to the number of people it would impact. The notation in the OP doesn't play well with "nearby" areas like powerset and matrices .. which are well served by the conventional exponentiation notation.


Speaking of Feynman, I remember when he was in high school he invented his own notation[0] for sin/cos/tan to use little angular lines, but then the issue he ran into was that nobody could decipher his math notation and he had trouble helping others learn, so he conformed to the sin/cos/tan notation.

[0] https://tex.stackexchange.com/questions/274463/feynman-trig-...


> The "graphical linear algebra" makes a good case with a larger blast radius, for example. However even that has nowhere near the impact of, say, Feynman diagrams.

These are sort of the same example! They're both string diagrams (https://en.wikipedia.org/wiki/String_diagram), just in different monoidal categories.


It also reminds me of calculus itself. One of the best outcomes of Newton and Leibniz discovering calculus nearly simultaneously, with little to no coordination until after the fact, is that as people discussed the results and compared the two efforts, what we ultimately ended up with was a lot of Newton's insights and pedagogy, built on Leibniz's notation.

That has long stuck with me: Leibniz developed the better, more generally useful notation as a tool of thought for the mathematicians who followed, yet Newton developed far more lasting insights into calculus despite a generally "inferior" notation (though presumably one specifically suited to his own thought processes).


Kudos for the insightful comment and good links.

Tangent: I first stumbled upon the Tau manifesto several years ago, and though it made sense to me, I couldn't tell whether anyone took it seriously. Still wondering.


I was a Math undergrad when I first read it, and I think the main reaction I got from faculty was 'dismissal' and the main reaction I got from peers was 'indifference' and, in one case, 'annoyance'.

I think it took root in the pop-math education space [0,1], but overall I think most of the Mathematical world has dismissed it as trivial at best and unnecessarily disruptive at worst.

It's a shame, because I feel that understanding why Tau makes equations and concepts easier for people to grasp and build intuitions around is key to understanding why people generally find maths difficult and frustrating, and can give some guidance on how to bring down that barrier/activation energy.

[0] Vi Hart, Pi is Wrong (https://www.youtube.com/watch?v=jG7vhMMXagQ)

[1] Numberphile, Pi vs Tau Smackdown (https://www.youtube.com/watch?v=ZPv1UV0rD8U)


People are looking at a bigger delta of improvement than that, IMO, but that also means they only want big commits from big names. The Tau thing really is too trivial, which means it may be outweighed by friction alone.


Yeah, the circle constant is considered 'solved' and has been a well-understood concept since... (googles) .. oh wow, 2560 BCE [0], with pi being used as the symbol for the last 400-odd years [1]. It's understandable that there wouldn't be much appetite to tamper with that if there isn't a gosh darned good reason.

[0] https://en.wikipedia.org/wiki/Pi#History

[1] https://en.wikipedia.org/wiki/Pi#Adoption_of_the_symbol_%CF%...

(Interesting to note the original coining of Pi was π/δ for what we now call Pi (circumference over diameter) and π/ρ for what we are calling Tau (circumference over radius).)


Tau looks too much like Pi.

It should be Rho for Rotation.


Thanks, Alison.


I think there's more basic opportunity for intuition and learning than pi/tau, which is using turns for angles. We tend to use degrees, which is less direct and requires memorisation/familiarity with magic numbers like 30, 45, 60, 90, 180, 720, etc. It takes more work to translate these numbers back and forth into angles (e.g. clock hands or pie charts) compared to twelfth, eighth, sixth, fourth, half, one, two, etc. turns. I think degrees should be left as a historical curiosity, like gradians, rather than the default we reach for in everyday life, from a young age (I can also see them being a nice 'mental maths shortcut', for those who are into such things).

I know many people struggle with fractions, which degrees can avoid (at least, on a surface level), but I don't know whether the highly composite nature of 360 would help, and fractions hinder, outside of classroom-style exercises. For example, 90 + 45 = 135 is easier to calculate than 1/4 + 1/8 = 3/8, but the latter might still be more intuitive as an angle (e.g. if answering with a diagram, or by turning one's body); and for anything more complicated than that I'd still reach for a calculator, even with degrees.

When problems require more sophistication than turns, we can introduce radians like we currently do. At that point tau makes sense as their conversion factor in equations, but it's still an unnecessary complexity when expressing angles; since any nice multiple of tau (or pi) can be divided through to get an even nicer number of turns.
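For what it's worth, the arithmetic comparison above is easy to play with in code. A minimal Python sketch using exact fractions from the standard library (the example values are the ones from the comment):

```python
from fractions import Fraction

# Exact turn arithmetic: a quarter turn plus an eighth turn.
# Fraction keeps the result exact, with no float rounding.
quarter = Fraction(1, 4)
eighth = Fraction(1, 8)
total_turns = quarter + eighth      # 3/8 of a turn

# The same angle in degrees, at 360 degrees per turn.
total_degrees = total_turns * 360   # 135 degrees
```

Both forms describe the same angle; the turn form just makes the "fraction of a full rotation" explicit.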


It's in practice impossible to change a mathematical convention so deeply embedded as π. So even though a lot of mathematicians 'take it seriously' in the sense of agreeing that τ is the more fundamental concept, none 'take it seriously' in the sense of actually using it in public mathematics.


I think the triangle notation is actually terrible; it breaks the = symbol. A fully-filled-in triangle with all three components is essentially an equation, or a set of equations, and I can't think of any other (common) situation in which we hide the equality symbol like that. Having only two of them filled out makes me feel like it's a math problem where we're being asked to fill out the rest of the equation, not an operator.

I also don't like that this is far from the only set of operations that might fit into a triangle of some sort. In fact I've seen math problems from school using it for + and - already. I haven't seen it for * and / but it's easy to imagine. It's possible this notation is already ruined for teaching students by the common core stuff already in use. And the mere fact that the operators can be arranged in a triangle is not sufficiently unique to give the triangle to this particular set of them.

One could argue that the "=" symbol could use a rethink, but I would consider this not a terribly good place to begin that argument just because one set of operators happens to have this particular relationship.

Putting up and down arrows under exponents/roots is also not that great; it looks fine when you have one letter above the arrow but it's not going to scale well. I'd happily argue that standard exponentiation doesn't scale particularly well either once the exponents start getting complicated, but putting another symbol below it doesn't help. Putting them as inline operators flows better, but may hide the lede too much, so to speak; while the exponentiation operator we use today may have some issues, at least it's clearly visible.

Really, the problem isn't the trio of exponents, roots, and log; the problem is just log. The whole "three letter operator" thing seems to have a lot of problems; see also the trig functions and their bizarre standards for sticking powers on them (where -1 is supermagical). That said, there probably isn't a problem large enough to be solvable here, because the solution isn't going to be better enough to overcome inertia.


Well, we could use = explicitly. Write a blank in an equation to mean "the thing that fills in this blank", ex. [_+1=2]=1. So

  x^y = [x^y = _]
  z^(1/y) = [_^y = z]
  log_x(z) = [x^_ = z]
The first four identities from the post are

  [x^_ = x^y] = y
  x^[x^_ = z] = z
  [_^y = x^y] = x
  [_^y = z]^y = z
The next two are (the nested [] confirm these are more complicated)

  [[_^y = z]^_ = z] = y
  [_^[x^_ = z] = z] = x
Generalizing

  [f(x) = _] = f(x)
  [f(_) = f(x)] = x
  f([f(_) = x]) = x
  [f(_, [f(x, _) = y]) = y] = x
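A hedged Python sketch of this blank-filling reading, restricted to positive reals; the names solve_exponent and solve_base are mine, not anything standard:

```python
import math

# solve_exponent(x, z) answers [x^_ = z], i.e. the exponent log_x(z).
# solve_base(y, z) answers [_^y = z], i.e. the base z^(1/y).
# Both assume x, z > 0 and x != 1, y != 0.
def solve_exponent(x, z):
    return math.log(z) / math.log(x)

def solve_base(y, z):
    return z ** (1.0 / y)

# Two identities from the post, checked numerically:
# [x^_ = x^y] = y  and  [_^y = z]^y = z
assert math.isclose(solve_exponent(2, 2 ** 10), 10)
assert math.isclose(solve_base(3, 27) ** 3, 27)
```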


"Write a blank in an equation to mean "the thing that fills in this blank", ex."

Hmmm... I think I want something other than "a blank", but there's some promise there. I feel like your suggestion has the advantage of humbly composing with all the existing notation, whereas the triangle idea itself seems to kinda arrogantly supersede it and rewrite how equations work for just that one operator. (I've added some leading adjectives to indicate how it sort of feels to me.)


The big difference between exponentiation and addition/multiplication is that it is non-commutative. This means there are two distinct inverses. The relations between them are not very easy to learn. Using this as a teaching tool could help a lot in explaining this kind of stuff.

I remember back in high school, whenever I met these kinds of problems I would just rewrite everything as powers and solve the equations in that setting, because otherwise dealing with all the interacting and different operations was too annoying. I could see this notation essentially doing that for everyone.

Now, I made it through high-school, and got a degree in math. I think it would be better if more people made it through high school math without hating it. Making logarithms, exponents and roots clearer might help do that.


There's not much inertia to overcome.

It can just be something to show to learners as a visual aid while teaching the standard notation, similar to how kids learn 10 different visual ways to add and multiply.


> It can just be something to show to learners as a visual aid while teaching the standard notation

My son's teacher used number pyramids like that for addition and subtraction a few weeks ago.


While the typesetting might not be very portable or compact, this is beautifully useful for teaching. The two extra identities found appear difficult to prove, but in this notation they are blindingly obvious.

I wonder if this can be applied to other sets of functions, or if the geometry of chaining functions like this can be extended to other such geometrically obvious proofs.


The two other identities are only tricky because they're using logarithms with a different base and they're writing their roots with the radical symbol. If you write them using log_b(x) = log(x)/log(b) they become:

log(z)/log(z^(1/y)) = y

z^(log(x)/log(z)) = x

the first is obvious from the fact that log(x^y) = y log(x) and the latter is why log(x)/log(z) is also considered the base z logarithm. The only reason it looks nonobvious is because the notation they chose makes it non-obvious that log_b(y) = 1/log_y(b) (and that the yth root of x is x^(1/y)).
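If it helps, both rewritten identities spot-check numerically; this is just a sanity check for sample values, not a proof:

```python
import math

# Sample values with x, y, z > 1.
x, y, z = 5.0, 3.0, 7.0

# log(z)/log(z^(1/y)) = y
lhs1 = math.log(z) / math.log(z ** (1 / y))
assert math.isclose(lhs1, y)

# z^(log(x)/log(z)) = x
lhs2 = z ** (math.log(x) / math.log(z))
assert math.isclose(lhs2, x)
```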


I don't know how hard they would be to come up with ex nihilo, but they're fairly straightforward to prove if you know that log_b(x) = log(x)/log(b) and log(x^y) = y log(x).

For log_{z^(1/y)}(z) = y (the base is the y-th root of z):

log_{z^(1/y)}(z) = log(z)/log(z^(1/y)) = log(z)/((1/y) log(z)) = y log(z)/log(z) = y

For z^(1/log_x(z)) = x (the log_x(z)-th root of z):

log(LHS) = log(z^(1/log_x(z))) = log(z)/log_x(z) = log(z)/(log(z)/log(x)) = log(x) = log(RHS)

Therefore LHS = RHS.


Although really the first two already have a unified notation, namely x^y and x^(1/y). There are some minor differences in usage between x^(1/y) and the yth root, but those come down to the fact that the yth root isn't uniquely determined.

And there's just 1 logarithm, which has the property log(x^y) = y log(x). You don't need the ones with a different base.


Yeah, their assumption that there's a ternary relation between those immediately breaks down when you consider sets other than the real numbers. And you don't even need to go that far, roots as used on the real numbers don't make sense in C.

The way I see it is that there's only one fundamental function, which is the exponential function, and log is its inverse. Everything else, including a^b, is syntactic sugar. (If you define exp on C, even sin and cos...)

I guess a different notation could have some meaning pedagogically, math notation is incredibly inconsistent at times, but there really is no "deeper truth" here.


Exponentiation does have some scenarios where it can be defined without an (obvious) exponential function, and roots may not be uniquely defined (or rather they are almost never unique), which means you need to be a bit careful, and which means that in theory the 3rd root could differ from the definition of x^(1/3).

However, in the cases where you need to be careful, most of the stuff you'd use the more general notation for wouldn't be applicable anyway; you'd have a high chance of writing down an expression that has no unique value, or can't even be evaluated.


IMHO, the "deeper truth" is addition is translation and multiplication is rotation/stretching (at least when it comes to rings and fields).

It involves getting a geometric understanding of e to the pi i, and 3blue1brown explains it better than I could:

https://www.youtube.com/watch?v=F_0yfvm0UoU [6.28 min]

https://www.youtube.com/watch?v=v0YEaeIClKY [3.14 min]

https://www.youtube.com/watch?v=mvmuCPvRoWQ [25 min]


> The way I see it is that there's only one fundamental function, which is the exponential function, and log is its inverse. Everything else, including a^b, is syntactic sugar. (If you define exp on C, even sin and cos...)

Why do you think this? It doesn't seem to fit historically or formally. 3^2 is a more elementary object than anything built with exp, and there's often no natural notion of exp(A) of a function A even when finite powers like A^3 are defined. exp(A) is defined with a power series that may not converge.


Premise: Forgive the sloppiness, I have some math background but I'm not a mathematician, any correction is more than welcome.

I don't really conceptualize them as the same kind of object, to be honest. I'm aware exponentiation is more fundamental, but the kind of exponentiation you are referring to is related to the intuitive concept of "do this N times", which only makes sense for positive integers.

When you are talking about real numbers, the notion of "repeated multiplication" and "exponentiation" diverge, for example, (-2)^2 is well defined and equal to (-2)(-2), but (-2)^(1/2) isn't, unless you relate it to the exponential function.

Since the OP was about a notation proposal for working with real numbers, in that specific context I believe the more natural interpretation is to relate everything to the complex exponential and work your way up from there.


> the kind of exponentiation you are referring to is related to the intuitive concept of "do this N times", which only makes sense for positive integers.

But it works when "this" is an action that does not have a sensible notion of being applied a fractional number of times, or an infinite number of times, which is a very large and important set of actions indeed.

> When you are talking about real numbers, the notion of "repeated multiplication" and "exponentiation" diverge

Yes of course, but mere continuity of the inputs doesn't pick out exp from any other base you might choose besides e. You do not have to relate it to the exp function.

> Since the OP was about a notation propos...

My specific objection was to your suggestion that a^b is syntactic sugar, which suggests notational convenience that does not reflect what's going on "under the hood".


> My specific objection was to your suggestion that a^b is syntactic sugar

Yes, I agree that 'syntactic sugar' is not the word we are looking for here, I have no objections to your comment.


> but there really is no "deeper truth" here.

I disagree, I do think there's a deeper truth. It seems to me that you have already internalized and understood it very well; that is the goal.

would you at least agree that there's beauty here? and this notation does make it more apparent; or as you say "could have some meaning pedagogically"


The thing is that ultimately notation isn't that important, what matters are the formal concepts at play (and to develop an intuition on how to manipulate them, if you're learning).

If this can help students "see the light", then sure, but I'm not entirely sold on the idea that the notation is actually the hardest part.

This is similar (with a different twist) to the various ideas going around that we should stop using base 10 and think in hexadecimal. It might be better in some absolute sense, but it's not something worth spending any energy on.

I'm just not sure I see much from this, but obviously I'm not the "intended target", so please take this with a pinch of salt.


Notation is extremely important for humans.

That's why math notation is used instead of the extremely wordy sentences that were used centuries ago.


Yes, the current notation is sufficiently expressive. That is not what is being discussed here.

The question is, can we find a notation that is much better at teaching the intuitive relationship between these 3 operations. Because notation isn't just about formalizing. Notation is about enabling better understanding. And for a large part, good notation should help teaching. Especially something like exponents, roots, and logarithms which are taught to many. People who happen to not get this probably stop doing math, and we need more people who can do math it seems.


> And there's just 1 logarithm, which has the property log(x^y) = y log(x). You don't need the ones with a different base.

Maybe I misunderstood your comment but it really sounds like you're saying that only the natural logarithm has this property, but in fact it's true with every base:

log(x^y; b) = log(x^y)/log(b) = y log(x) / log(b) = y log(x; b)
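For what it's worth, Python's math.log takes an optional base argument, so the power rule can be spot-checked for several bases at once:

```python
import math

# The power rule log_b(x^y) = y * log_b(x) holds for any base b,
# not just the natural logarithm. math.log(x, b) is log base b.
for b in (2, 10, math.e):
    x, y = 3.0, 4.0
    assert math.isclose(math.log(x ** y, b), y * math.log(x, b))
```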


The comma is important. Or replace 'which' with 'and it'.


Rejigging their text slightly, what they said was "There's only one useful logarithm. It has this property. It's the only useful logarithm."

Yes I noticed their use of a comma (and "which" rather than "that"), in fact that's why I added the disclaimer at the start of my comment about possibly misunderstanding theirs. But sandwiching a mention of that property in the middle of making the same point twice only makes sense if it's a supporting argument.


> there's just 1 logarithm, which has the property log(x^y) = y log(x). You don't need the ones with a different base.

That's not true. The natural logarithm is the only one whose derivative is 1/x but the identity you wrote is true with any base.

People had been using logarithmic tables to do multiplication centuries before Napier came up with "e".


True, although the fact that all of them are pretty much equivalent does mean that there's no real reason to think of it as a logarithm with 'base' anything.

Also, to the best of my knowledge there are no logarithm tables that predate Napier (1550-1617), and when you try to calculate those tables you naturally stumble upon the natural logarithm (or at least a very close approximation), as the easiest way to create a logarithm table is to start with the powers of something like 1.00000001.


Right, I am not sure why I thought that the slide rule predated logarithms.


> You don't need the ones with a different base.

To add to that, we need only one logarithm, in the sense that all the others then follow. For any base b, we have

log(x; b) = log(x)/log(b)


I think to complete the point, it’s also worth pointing out that for any bases b, c and d:

log(x; b) = log(x; c)/log(b; c) = log(x; d)/log(b; d)

The second equality is why you can ignore c and d, and just pretend there's one logarithm when you do

log(x; b) = log(x)/log(b)


For log, you could also write x ^-1 y if you want uniformity, although it looks rather ugly and confusing to me.


Yeah, let's not. The main function of the logarithm is to transform multiplication into addition, so log(x y) = log(x) + log(y) and log(x^k) = k log(x) are uniform and consistent with that idea.

Also exponentiation is a way more fundamental property than the logarithm so it's weird to place its notation on the same footing.


Exponentiation to logarithm is like addition to subtraction and multiplication to division, which are all pretty much on the same footing.


Perhaps adaptive edtech will enable greater use of variant notations? Two decades back, I built a toy context where you could select a page's notation from a pull-down, and mouse over to see alternate forms. Personalized notation can compromise communication, but like some students benefiting from Feynmanesque drawing of equations in multiple colors, variant notations might be used tactically, for targeted disruption of misconceptions and such.


LaTeX/MathJax can pretty much do that.

MathJax already supports toggling between multiple renderers (used mainly for image formats, but it could be used for more drastic variants too).


Related video by 3Blue1Brown: https://youtu.be/EOtduunD9hA

EDIT: correct link below.


"This is the corrected version of the one I put out a month or so ago, in which my animation for all the inverse operations was incorrect": https://www.youtube.com/watch?v=sULa9Lc4pck.


This really hammers home the advantage of this notation. It leads naturally to the question, "What is the operation when we leave the bottom right constant?", which Grant calls "O-plus" (TeX calls it \oplus), and which is in fact related to the harmonic mean (which is n times the o-plus of the terms). I don't know if there's a better term for "o-plus" other than "reciprocal sum of reciprocals". Maybe "optical sum", which kind of makes o-plus make even more sense?

https://en.wikipedia.org/wiki/Optic_equation

https://en.wikipedia.org/wiki/List_of_sums_of_reciprocals
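A small Python sketch of the operation being described; the name oplus is borrowed from the comment above and is not a standard library function:

```python
from statistics import harmonic_mean

# "o-plus": the reciprocal of the sum of reciprocals.
def oplus(*terms):
    return 1 / sum(1 / t for t in terms)

# Relation to the harmonic mean: harmonic_mean = n * oplus(terms).
vals = [2.0, 3.0, 6.0]
assert abs(harmonic_mean(vals) - len(vals) * oplus(*vals)) < 1e-12
```

This is also the familiar formula for parallel resistors: oplus(r1, r2) is the combined resistance.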


> which is in fact related to the harmonic mean (which is n times the o-plus of the terms). I don't know if there's a better term for "o-plus" other than "reciprocal sum of reciprocals"

I would call it the "harmonic norm", which is consistent with it being the "norm version" of the harmonic mean. It might also be called the "(p=-1) norm", since it would be a p-norm with p = -1, or the "L-1 norm" to put it in the "L norm" family.

https://en.wikipedia.org/wiki/Norm_(mathematics)#p-norm


"Though the ancient Egyptians used heap as a general term for an unknown quantity. Diophantus, a Greek mathematician in Alexandria about 300 AD, was probably the original inventor of an algebra using letters for unknown quantities. Diophantus used the Greek capital letter delta (not for his own name!) for the word power (dynamis; compare dynamo, dynamic, and dynamite), which is therefore one of the oldest terms in mathematics. A conjunction has been used to raise a function to a power. This syntax brings out the parallelism between raising a number to a power and applying a function an equal number of times. The algorithm fails when the number of doublings is further increased."

A proposed non-commutative infix binary operator, inverse to the non-commutative infix binary exponentiation operator, is "[x's] 'log base' [base]":

Operator symbols ([] = implicit; vertical orientation / higher potential = implicit increasing position on the number "line"):

| = addition, - = subtraction (two vertices)

▽ = multiplication, △ = division (triangle "ratio"; three vertices)

◇ = exponentiation, □ = log base (square, "The power of a line is the square of the same line" [x^2]; four vertices)...

[0|]y=y : | = next() grouping operator

[0]-y : - = inverse operator

[0|]y [|]-y=0 : 0 = identity operand

[1▽]y=y : ▽ = | grouping operator

[1]△y : △ = inverse operator

[1▽]y [▽](1△y)=y△y=1 : 1 = identity operand

[y◇(1△y)◇]y=y : ◇ = ▽ grouping operator

[y◇(1△y)] □ y = 1△y : □ = inverse operator

[y◇(1△y)◇]y [◇]((y◇(1△y)) □ y) =y◇(1△y) : y◇(1△y) = identity operand <in the infinite limit = e>


Ideally, we would treat e^x and ln(x) = log~e(x) as the default. We know that a^x = e^(x ln a), and log~a(x) = ln x / ln a. So if we introduced an operator ^(x) = e^x, and another v(x) = ln(x), we could write a^x = ^xva, a√x = ^(vx/a), log~a(x) = vx/va. Common identities would be written thus:

  ^xva * ^yva = ^(xva + yva) = ^(x + y)va

  ^yv(^xva) = ^yxva (note that this is just the identity v(^x) = x)

  ln(a^x) = x ln a is just v(^xva) = xva

  ^yvx = z <=> yvx = vz <=> y = vz/vx

  ^yvx = z <=> yvx = vz <=> vx = vz/y <=> x = ^(vz/y)
(alternate derivation:)

  ^yvx = z <=> ^(v^yvx/y) = ^(vz/y) <=> x = ^(vz/y)
Differentials are thus:

  D(^f) = Df^f

  D(vf) = Df/f

  d/dx(^xva) = va^xva

  d/dx(^nvx) = d/dx(nvx)^nvx = n/x * ^nvx = n ^-1vx ^nvx = n ^(n-1)vx

  d/dx(vx/va) = 1/xva
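The two primitives above map directly onto exp and ln; a short Python spot-check (the names up and dn are mine, standing in for "^" and "v"):

```python
import math

# The comment's primitives: "^" as exp, "v" as ln.
up = math.exp   # ^x = e^x
dn = math.log   # vx = ln(x)

a, x = 2.0, 3.0

# a^x written as ^(x va):
assert math.isclose(up(x * dn(a)), a ** x)
# the a-th root of x written as ^(vx/a):
assert math.isclose(up(dn(x) / a), x ** (1 / a))
# log base a of x written as vx/va:
assert math.isclose(dn(x) / dn(a), math.log(x, a))
```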


Don't particularly care for any of these. "fill-in-the-blank" notations don't really work for operators. You can drop the exponents and write x^y = z as y ln x = ln z, and then everything commutes and is solved nicely. Same idea (until you get to complex numbers, maybe).

With multiplication, xy = z is solved by y = z/x or x = z/y, which works because of the commutativity. If it wasn't commutative, though, you would need to use left- and right-division: x = z/y but y = x\z (I guess), implying y = (x^-1) z.

In the same vein, x^y = z has the radical symbol as a specialized notation to invert it on one side: x = √^y z, which we can parse as a non-commutative operator that acts like f(z) = z^(1/y). But it helps that the raising to a power has an inverse operation that is _also_ raising to a power (x^y)^(1/y) = x. Whereas 'being raised to a power' doesn't have an inverse operation that is also 'being raised to a power'.

The other problem is that when you apply a logarithm operator to a term, powers switch to being multiplied. They 'change domains' in a sense. So it's not possible to do anything to the 'x' in x^y on its own, because that would result in f(x)^y which is still exponentiating by y. You need the 'y' to 'move' into the main line of the equation, out of the exponent.

I think a good way to model this would be to imagine allowing x^y = z shifting so that the 'y' is the main line of the equation, becoming something like 1_x y = 1_z. 1_x and 1_z would ideally have the subscript on the left side, to avoid confusion with other uses of subscripts, and to look like a shifted version of x^1 and z^1. These are literally log x and log z in some base, but they're just numbers, so you can solve the equation as y = 1_z/1_x. Then you have identities like x^1_x = e, so x^(1_z/1_x) = e^(1_z) = z. I think you just do away with the notation log_x z entirely; it's too odd compared to everything else.

So basically I propose y = 1_z/1_x, but I don't think you can reconcile this with the square root notation at all, as they're too different. But it does, at least, keep things consistent with using a division operation for the inverse, akin to x = z^(1/y).


We have a unified notation already?

x^y = z

x = z^{1/y}

y \log(x) = \log(z)

Some of these generalize well to complex numbers/matrices/groups/flows/etc. some don't.


I would like a more unified notation for the concept of "inverse". But I have no good suggestions for how to implement that. However, I have some bad suggestions just to illustrate what I mean:

INV(*) x, instead of 1/x

INV(+) x, instead of -x

INV(f) x, instead of f^{-1}(x)

The last one should really be (INV(°) f) x = f^{-1}(x), where ° denotes function composition as the group operator. But involving this operator in the notation would probably be overkill in most circumstances.
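A toy Python sketch of the INV idea, mapping each binary operation to the function producing an element's inverse under it; the table INV here is purely illustrative:

```python
import operator

# INV(op) x: the inverse of x under the binary operation op.
INV = {
    operator.add: lambda x: -x,       # INV(+) x = -x
    operator.mul: lambda x: 1 / x,    # INV(*) x = 1/x
}

# Combining an element with its inverse yields that operation's identity:
assert operator.add(5, INV[operator.add](5)) == 0
assert operator.mul(5, INV[operator.mul](5)) == 1.0
```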


I guess you'd want to do the same for the other operations then (plus/minus, multiply/divide), and perhaps even generalize:

https://en.wikipedia.org/wiki/Hyperoperation
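A minimal recursive sketch of the hyperoperation hierarchy, for non-negative integer arguments only (the base-case conventions follow the Wikipedia article linked above):

```python
# H(0) is successor, H(1) addition, H(2) multiplication,
# H(3) exponentiation, H(4) tetration, and so on.
def hyper(n, a, b):
    if n == 0:
        return b + 1
    if n == 1:
        return a + b
    if n == 2:
        return a * b
    if n == 3:
        return a ** b
    if b == 0:
        return 1  # convention for tetration and above
    return hyper(n - 1, a, hyper(n, a, b - 1))

assert hyper(4, 2, 3) == 16  # tetration: 2^(2^2)
```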


Exponentiation isn't commutative, so each position on the triangle matters. For multiplication, for instance, there would be two equivalent triangle states, since you can swap xy=z and yx=z. Not saying it's a bad idea though.


It doesn’t make me a fan of this notation, but multiplication isn’t necessarily commutative, either.

Matrix multiplication is a simple example (https://en.wikipedia.org/wiki/Commutative_property#Matrix_mu...)


I'm in slight disbelief that I can't find anyone pointing out that the y'th root of x can also be written x^(1/y) - it must be in there somewhere...


So that’s why Johnny can’t subtract. There’s a corresponding problem with + and - .

For a + b = c we should of course write the equivalent a = b - c . If 2 + 3 = 5 then 2 = 3 - 5 .


log_a(b)*log_b(c) = log_a(c)

My favorite identity.
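It spot-checks nicely with Python's two-argument math.log:

```python
import math

# log_a(b) * log_b(c) = log_a(c), checked for sample bases.
a, b, c = 2.0, 3.0, 5.0
assert math.isclose(math.log(b, a) * math.log(c, b), math.log(c, a))
```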


i vote for this notation:

b^p -> b^p

\root p \of x -> x^(1/p)

{\log_b} x -> b^? x



