Multiplication of real and complex numbers is typically defined by starting from repeated addition, extending this notion to rationals, and then extending that notion to reals by taking limits.
How exactly are you going to present multiplication of real numbers axiomatically without essentially including an axiom that bootstraps everything from repeated addition?
I suppose you can try defining the reals as "the unique complete ordered field," or the complex numbers as "the unique algebraically closed field of characteristic zero with cardinality c," but I don't think either of those is pedagogically useful to someone who is still learning what multiplication is.
Multiplication of the Surreals is a recursive operation using sums (addition and subtraction on the left and right sets). Since the Reals are a strict subfield of the Surreals one can define multiplication of the reals using only the same recursive formula and restricting both operands to be Reals.
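For reference, Conway's recursive product of x = {X_L | X_R} and y = {Y_L | Y_R} is built entirely out of sums and differences of products of earlier-created numbers:

```latex
xy = \{\, x_L y + x y_L - x_L y_L,\ \ x_R y + x y_R - x_R y_R
      \ \mid\ x_L y + x y_R - x_L y_R,\ \ x_R y + x y_L - x_R y_L \,\}
```

where x_L ranges over X_L, x_R over X_R, and similarly for y. Each product on the right involves at least one "simpler" operand, so the recursion is well-founded (transfinitely).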
It's probably worth emphasising that the recursion you need to "construct" the Surreals is infinite; in other words, this does not give a reasonable algorithm to (for example) add two real numbers. You need S_omega in order to have even all the rational numbers.
The construction is rather involved, but if we're only interested in the reals for now, you can think of it as defining a real number by a set of rational numbers: define a particular "real number" to be the set of all rationals less than it. For example, sqrt(2) is defined to be the set of all rationals p/q that are negative or satisfy p^2/q^2 < 2. We can "recursively" define addition of these real numbers in terms of addition of rational numbers, because to add two reals you "just" take all pairwise sums of rationals from their respective sets.
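As a toy sketch of that last step (my own illustration, not the full surreal machinery): model a real as the membership predicate of its lower cut, and add two cuts by bisecting toward the top of the first cut and testing the remainder against the second.

```python
from fractions import Fraction

# Toy model: a "real" is the membership predicate of its lower cut,
# i.e. cut(q) is True iff the rational q lies strictly below the number.
def sqrt2_cut(q: Fraction) -> bool:
    return q < 0 or q * q < 2          # q < sqrt(2)

def add_cuts(A, B, lo=Fraction(-4), hi=Fraction(4), iters=60):
    """q is in A + B iff q = a + b for some a in A, b in B.

    Cuts are downward closed, so it suffices to find a rational a just
    below sup A (by bisection, assuming A(lo) holds and A(hi) fails) and
    test whether q - a is in B. The answer is reliable as long as q is
    not within about (hi - lo) / 2**iters of the exact sum.
    """
    a_lo, a_hi = lo, hi
    for _ in range(iters):
        mid = (a_lo + a_hi) / 2
        if A(mid):
            a_lo = mid
        else:
            a_hi = mid
    return lambda q: B(q - a_lo)
```

Note the hedge baked into the code: the bisection depth is finite, so the result is only trustworthy away from the boundary, which is exactly the "infinite recursion" problem being discussed.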
In general there is no sensible algorithm to do anything in the real numbers, since most real numbers aren't even computable (there is no way to represent an arbitrary real number on a Turing machine).
Of course, once you decide to iterate over uncountable sets, infinity starts to appear all the time.
This isn't the only case of infinite calculations that can't be carried out in practice but that mathematicians use all the time anyway; and it reflects quite well the fact that multiplying irrational numbers isn't something one can do in practice. There is no problem with it.
Multiplying algebraics is trivial. A rectangle with sides sqrt(3) and sqrt(2) has area sqrt(6), which can be approximated by a decimal if needed.
There are countable/computable/constructible subsets of the reals where multiplication has a finite algorithm, and it is not repeated addition.
One example is the algebraics, as well as extensions including a few special constants like pi. These are the subsets of the reals most commonly used in math and science. So in a wide range of problem areas, multiplication is not just repeated addition.
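For instance, here is a minimal sketch (my own toy code) of exact arithmetic in the field Q(sqrt(2)): the product rule is a short closed formula on rational coefficients, nothing like repeated addition.

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass(frozen=True)
class QSqrt2:
    """A number a + b*sqrt(2) with a, b rational: exact, finite arithmetic."""
    a: Fraction
    b: Fraction

    def __mul__(self, other):
        # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def __float__(self):
        # a decimal approximation, only needed at the very end if at all
        return float(self.a) + float(self.b) * 2 ** 0.5
```

For example, squaring `QSqrt2(Fraction(0), Fraction(1))` (i.e. sqrt(2) itself) gives exactly `QSqrt2(Fraction(2), Fraction(0))`, with no approximation anywhere.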
I definitely wasn't trying to indicate that there was a problem with this, just pointing out that the path of using the Surreals (or anything else) to give an "algorithmic" description of real addition or multiplication is probably a bad idea.
I wasn't even attempting to give an "algorithmic" description, just a formulaic one that depends on the axiom of infinity (and on accepting transfinite induction as a valid process). Since at least one of the usual processes for constructing the Reals (Dedekind cuts) needs this anyway, I don't feel it's much of a stretch in reasoning. And the Surreals have a nice recursive formula for multiplication that eventually bottoms out in repeated sums, and they're a strict superset of the Reals, so it does apply there.
If you want an algorithmic (with no possibility of needing an infinite number of steps) explicit construction of anything on the Reals, you're going to be disappointed. They're an infinite set.
You can get algorithmic explicit constructions of (for example) integer addition, rational addition and multiplication, and even addition and multiplication of algebraic numbers on a Turing machine (all algebraic numbers are computable). All of these sets are infinite; the reals are particularly "badly behaved" even as far as infinite sets go.
Not algorithms. There will be infinite addition involved, and algorithms are finite.
Thinking of multiplication as repeated addition also won't explain anything about it. It's a separate operation. Deal with it. For similar reasons, you can't calculate the x-th power of a number, when x is irrational, by decomposing it into integer powers and roots.
This metaphor is just training wheels. At some point you should lose it.
Together with the notion that "multiplication is repeated addition" comes the notion that numbers are quantities. Only some of them are, and this isn't really what makes them numbers. Now what exactly gets repeated, when you don't have quantities?
Which is a technical way of saying "in real life people use finite rational or algebraic approximations for reals, so uncountability of reals and infinite precision aren't a problem".
Consider the following real number made of binary digits:
Enumerate all Turing machines and all possible inputs; if the i-th machine/input combination halts, the i-th binary digit in our number is 0, otherwise 1.
This number is well-defined (once you fix your enumeration scheme).
But there's no finite algorithm to produce approximations in your sense.
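Written out, with (M_i, x_i) the i-th machine/input pair in the fixed enumeration, the number is

```latex
r = \sum_{i=1}^{\infty} b_i \, 2^{-i},
\qquad
b_i =
\begin{cases}
0 & \text{if } M_i \text{ halts on } x_i,\\
1 & \text{otherwise,}
\end{cases}
```

so producing the digits of r is exactly as hard as deciding the halting problem, and no finite approximation procedure exists.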
What I was after is what are also called computable numbers (https://en.wikipedia.org/wiki/Computable_number). But I used the more general term of co-recursion, which also applies to other data structures like infinite lists or, with some generalization, infinite event loops, where the important condition is that each run through the body of the loop takes only finite time.
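A computable real in that article's sense can be modeled as a program that, given n, returns a rational within 2^-n of the number. A sketch for sqrt(2) (my own toy code, by bisection):

```python
from fractions import Fraction

def sqrt2_approx(n: int) -> Fraction:
    """Return a rational q with 0 < sqrt(2) - q < 2**-n, by bisection on [1, 2].

    Invariant: lo**2 < 2 < hi**2, so sqrt(2) always lies in (lo, hi).
    The loop runs roughly n times and each pass does finite rational work.
    """
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo
```

The number itself is never materialized; only the approximation map is, which is precisely the co-recursive view: each demand for more precision takes finite time.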
Algorithms can work on symbolic formulas, and symbols can represent anything; infinite objects, operations on infinite objects, infinite sets of operations on infinite objects, and so on.
> In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/) is a finite sequence of well-defined, computer-implementable instructions, typically to solve a class of problems or to perform a computation.
Newton's method is finite too. You perform finitely many iterations. It doesn't calculate roots. It calculates their approximations.
If you use a termination condition based on convergence of the iterates instead of a fixed number of iterations (often the case), then you generally don't know beforehand the length of the finite sequence. Maybe you know a bound, but in general you might not even have that.
In an important sense, it only becomes a finite algorithm. It isn't one. You cannot write the finite sequence of instructions down. It's got loops.
To your point about approximations vs not, if you have an algorithm that, for any desired approximation accuracy can compute the square root to that accuracy in a finite number of steps, then that process is as much "the square root" as anything involving the real numbers.
> To your point about approximations vs not, if you have an algorithm that, for any desired approximation accuracy can compute the square root to that accuracy in a finite number of steps, then that process is as much "the square root" as anything involving the real numbers.
Not really, since approximations, no matter how accurate, don't preserve algebraic properties. You only get to know what it's bigger/smaller than.
I think I understand what you mean, so let me dial back "anything involving the real numbers".
If you are representing or thinking of "sqrt(2)" as "the positive solution to x^2 = 2", then you preserve algebraic properties. But you generally (correct me if I'm wrong) don't get to know whether it's bigger or smaller than something else of the form "the _choose_uniquely_ solution to _some_equation_" unless you rely on an argument where you invoke approximations.
Well, no, not really. The standard definition of the reals is as the unique nontrivial totally-ordered, Dedekind-complete, Archimedean field up to isomorphism.
So what you would really need is a uniqueness proof, with addition and multiplications "provided" by the hypothesis.
And how do you prove that a totally-ordered, Dedekind-complete, Archimedean field does not lead to contradiction besides constructing it explicitly by bootstrapping from natural numbers?
Nothing in the construction requires you to show an algorithm that given x, y \in R allows you to compute x+y and xy. You would make the usual Dedekind construction and show it satisfies the axioms of such a field. (as a matter of fact, no such algorithm exists in full generality!)
It's probably a tomato/tomato kind of thing, but I'm only objecting to the 'algorithm' part of parent's comment.
>Only for rational numbers. Doesn't work for real and complex numbers.
Complex numbers aren't really relevant, in my opinion, because they are usually introduced as an extension of the rules for reals and polynomials. To multiply two complex numbers, you can totally forget that i is imaginary, do the multiplication as if it's just an ordinary variable, then substitute i back in. But that relies on being able to multiply polynomials, which would be difficult to define in terms of repeated addition.
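That recipe is easy to spell out (a toy sketch of my own): multiply (a + b i)(c + d i) as polynomials in a formal symbol i, then reduce with i^2 = -1.

```python
def complex_mul(a: float, b: float, c: float, d: float):
    """(a + b i)(c + d i), treating i as an ordinary variable.

    Polynomial product: ac + (ad + bc) i + bd i^2; substituting i^2 = -1
    yields the real and imaginary parts of the product.
    """
    const, lin, quad = a * c, a * d + b * c, b * d
    return const - quad, lin  # (real part, imaginary part)
```

For example, `complex_mul(1, 2, 3, 4)` gives `(-5, 10)`, matching (1 + 2i)(3 + 4i) = -5 + 10i.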
To some extent all of mathematics is a lie. We can do multiplication on the reals because we have decided that it's allowed. It is reasonable to define multiplication at first as repeated addition and then define a way to extend that to the reals that is consistent with the first definition.
Only for rational numbers. Doesn't work for real and complex numbers.
> It's even a useful thing to do, how else would you define multiplication?
Axiomatically, not algorithmically.