> You can definitely see multiplication as repeated addition
Only if you restrict to whole numbers. But multiplication does not just apply to whole numbers. How would you see multiplication by pi as repeated addition?
> how else would you define multiplication?
As a separate operation with its own properties. The subject article gives several examples of properties of multiplication that are simply different from properties of addition.
Multiplication by rational fractions p/q is just multiplication by whole p, followed by multiplication by 1/q. 1/q is just the number that when multiplied by q gives 1. So you can cash that all out as repeated addition.
Multiplication by an irrational is just an infinite sum of multiplications by rationals, no? x times pi = x times 3, plus x times 1/10, plus x times 4/100, plus....
I don't see there's any conceptual issue here. Mathematicians feel free to correct me.
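To make that digit-by-digit idea concrete, here is a rough sketch (my own illustration, nothing rigorous): the function names and the hard-coded digit list are made up for the example, and the division by a power of ten is exactly the contested "1/q" step.

```python
# A rough sketch of the idea above: approximate x * pi by repeated addition,
# one decimal digit of pi at a time. The hard-coded digits and the division
# by a power of ten (the contested "1/q" step) are my own illustration.

def repeated_add(x, n):
    """Add x to itself n times (n a non-negative whole number)."""
    total = 0.0
    for _ in range(n):
        total += x
    return total

def times_pi_approx(x, digits=(3, 1, 4, 1, 5, 9)):
    """Approximate x * pi as x*3 + x*1/10 + x*4/100 + ..."""
    total = 0.0
    scale = 1
    for d in digits:
        total += repeated_add(x, d) / scale  # repeated addition, shifted by a power of ten
        scale *= 10
    return total

print(times_pi_approx(2.0))  # ~6.28318, close to 2 * pi = 6.2831853...
```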
That is not repeated addition. You can't add a number to itself 1/q times. At least, not unless you're willing to adopt increasingly perverse interpretations of "repeated addition" as you try to cover more and more numbers. See my response to wruza upthread.
Define the rational numbers as ordered pairs of integers where the second number may not be 0. Consider the rational number (a, b) to be equivalent to (c, d) if a*d = b*c.
(Exercise for the reader: Prove that this is an equivalence relation)
Now define (a, b) + (c, d) = (ad + bc, bd), (a, b) * (c, d) = (ac, bd)
(Exercise for the reader: Prove that the subset of rationals (n, 1) behaves just like the integers)
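If it helps to see the construction run, here is a minimal sketch of it; the class name and the printed checks are my own illustration.

```python
# Rationals as ordered pairs of integers (a, b) with b != 0, with the
# equivalence and the + and * defined above. A toy sketch, not a full
# implementation (no reduction to lowest terms, no hashing, etc.).

class Rat:
    def __init__(self, a, b):
        if b == 0:
            raise ValueError("second component must be nonzero")
        self.a, self.b = a, b

    def __eq__(self, other):   # (a, b) ~ (c, d)  iff  a*d == b*c
        return self.a * other.b == self.b * other.a

    def __add__(self, other):  # (a, b) + (c, d) = (ad + bc, bd)
        return Rat(self.a * other.b + self.b * other.a, self.b * other.b)

    def __mul__(self, other):  # (a, b) * (c, d) = (ac, bd)
        return Rat(self.a * other.a, self.b * other.b)

    def __repr__(self):
        return f"({self.a}, {self.b})"

# The embedded copies of the integers, (n, 1), behave like the integers:
print(Rat(2, 1) + Rat(3, 1) == Rat(5, 1))  # True
print(Rat(2, 1) * Rat(3, 1) == Rat(6, 1))  # True
print(Rat(1, 2) == Rat(2, 4))              # True: equivalent pairs
```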
Hmm, so just go about it differently: x * p/q with p, q integers is (x*p) / q. The first bit is repeated addition. The second bit means: find the number r such that r * q = x * p. Even if r is not an integer, q is an integer, so we can try different numbers, add them to themselves q times, and close in on the answer.
So I still think conceptually it's fine to think of it as repeated addition? It might be algorithmically a bad way to do it ("increasingly perverse", taking limits when the number is irrational, etc.). I don't know how computers actually implement multiplication - though Wikipedia (https://en.wikipedia.org/wiki/Binary_multiplier) says it works via shifts and adds.
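For what it's worth, here is a minimal sketch of that shift-and-add scheme for non-negative integers - my own toy version, not taken from the article.

```python
# Shift-and-add multiplication of non-negative integers: each set bit of b
# contributes a copy of a shifted into place, and the copies are summed.

def shift_add_multiply(a: int, b: int) -> int:
    result = 0
    shift = 0
    while b:
        if b & 1:                  # if this bit of b is set...
            result += a << shift   # ...add a shifted copy of a
        b >>= 1
        shift += 1
    return result

print(shift_add_multiply(13, 11))  # 143
```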
Or maybe, it's bad to teach it to kids this way? But then I think we need evidence from educationalists.
So now your definition of "repeated addition" is "repeated addition, plus a version of multiplication". Division is the inverse of multiplication, so your definition is circular: you're "defining" multiplication in terms of repeated addition and multiplication.
Similar objections apply to another poster's contention upthread that the "repeated addition" definition is justified because of the distributive law. The distributive law defines a times (b plus c) in terms of (a times b) plus (a times c). So it's useless as a definition of multiplication in terms of addition.
I don't think so. Look more closely. (x * p) is repeated addition with p an integer - we can do this with no multiplication involved. Then dividing by q involves finding r such that r * q = x * p. Here, again, we can do this by finding r such that r, added to itself q times, equals x * p. Since q is an integer, this again involves repeated addition. (It does need an algorithm to find the right value of r. I suppose that trying an arbitrary value, then increasing or decreasing it, would work.)
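A small sketch of that trial-and-error search, using only repeated addition and comparisons; the bisection strategy and the tolerance are my own choices for illustration, not something from the comment above.

```python
# Compute x * (p/q) for whole p and q >= 1: first x*p by repeated addition,
# then search for r such that r added to itself q times equals x*p.

def repeated_add(value, times):
    total = 0.0
    for _ in range(times):
        total += value
    return total

def times_fraction(x, p, q, tol=1e-12):
    target = repeated_add(x, p)          # x*p, p a non-negative whole number
    lo, hi = -abs(target) - 1, abs(target) + 1
    while hi - lo > tol:
        r = (lo + hi) / 2
        if repeated_add(r, q) < target:  # r added to itself q times
            lo = r
        else:
            hi = r
    return (lo + hi) / 2

print(times_fraction(2.5, 3, 4))  # ~1.875, i.e. 2.5 * 3/4
```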
Yes, that's the "first bit" that I was not raising an issue with.
> dividing by q involves finding r such that r * q = x * p
Which, as you note, requires a trial and error algorithm to find r. But in any case, r is the answer being sought, not the thing we're supposed to be adding to itself some number of times to get the answer.
> How would you see multiplication by pi as repeated addition?
By taking 3 times as usual and then another .14 approximately. E.g. for 2, it is 3+3+28/100, roughly 6.28. (edit: if you’re confused by /100, read it as “two places next to a point”)
If you’re asking how to do that exactly, first tell me how you even add pi to some number exactly. Let’s start with one: 1 + pi:
> By taking 3 times as usual and then another .14 approximately
First, taking 3 ".14 times" doesn't make sense, at least not in an elementary school student's understanding of "repeated addition".
Second, pi is irrational, as you evidently realize since you say "approximately". There is no way to add 3 to itself pi times.
> If you’re asking how to do that exactly, first tell me how you even add pi to some number exactly. Let’s start with one: 1 + pi:
Add 1 to the 3 and keep the part to the right of the decimal point the same. Addition is commutative.
Basically, your argument is that multiplication is repeated addition as long as we're willing to adopt increasingly perverse interpretations of "repeated addition" as we expand the scope of the numbers we use. Wouldn't it be better to just admit up front that multiplication is not repeated addition, although the two are similar for some types of numbers?
But what is that number? You only described an algorithm that relies on a vague definition of “part to the right”. See, you can’t even write it down (unlike 1 + 3.14 = 4.14), because pi is not really a number, in a sense. It is an infinite calculation that happens to converge to a value between 3.14 and 3.15. You can never get rid of “pi” in your calculations unless it cancels out naturally, so for practical purposes it’s 3.14 and for theoretical purposes it’s “pi”.
Let’s make it clear: you cannot add pi, or take something pi times, at all. You can only add to the factors to the left of it (same as with a+bi). That’s why you cannot multiply by pi by repeated addition, not because the numeric method is wrong.
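If it helps, here is a tiny sketch of that "only touch the factors next to the symbol" idea, treating numbers of the form a + b*pi the way one treats a + bi; the class and its names are my own illustration, nothing from the thread.

```python
from fractions import Fraction

# Numbers of the form a + b*pi with exact rational a and b. Pi stays symbolic;
# adding, or scaling by a rational, only ever touches the coefficients,
# much like the a+bi analogy above. (A toy illustration.)

class PiNumber:
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __add__(self, other):
        return PiNumber(self.a + other.a, self.b + other.b)

    def scale(self, k):  # multiply by an exact rational k
        return PiNumber(self.a * Fraction(k), self.b * Fraction(k))

    def __repr__(self):
        return f"{self.a} + {self.b}*pi"

pi = PiNumber(0, 1)
print(PiNumber(1) + pi)  # 1 + 1*pi -- exact, no decimals needed
print(pi.scale(3))       # 0 + 3*pi
```

Multiplying two such numbers together would need a pi*pi term that this representation cannot hold; pi only ever gets carried along symbolically, much as the comment above says.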
> Wouldn't it be better to just admit up front that multiplication is not repeated addition, although the two are similar for some types of numbers?
For generalized reasoning, yes. For teaching, not sure. Which is better, a student who knows that mul is rep-add, or a student who gave up and doesn’t know even that? When they learn, they ask themselves why and it’s a rabbit hole. You should stop somewhere before turning into a maths professor, if that’s not your goal.
Pi minus 3. If you are saying you can't actually perform that operation, great! That means you are agreeing with me. See below.
> you cannot multiply by pi by repeated addition
Thank you for agreeing with my main point. See below.
> not because the numeric method is wrong.
If someone is going to claim that multiplication is repeated addition, then their definition of multiplication in terms of repeated addition must cover all cases. You are saying it doesn't cover pi, which means it doesn't cover all cases. So the definition is wrong.
> For generalized reasoning, yes. For teaching, not sure.
Why not? What's wrong with telling kids, multiplication in general is a distinct primitive operation, we are teaching you how to multiply whole numbers using repeated addition, but be aware that this method will not generalize to all cases?
> Which is better, a student who knows that mul is rep-add, or a student who gave up and doesn’t know even that?
A student who has been told what I just described above is not in either of these positions, so you are arguing against a straw man.
> When they learn, they ask themselves why and it’s a rabbit hole.
No, it's natural human curiosity which should be encouraged, not stomped on. At some point they'll either realize that they're up against material they're not yet ready for, and put it aside for later, or they'll end up being a math prodigy. Both outcomes are better ones than them just being told "this is it, don't ask any questions".
I think what they are saying is that if we can't say that multiplication is repeated addition due to the reason you propose, then addition itself is ill-defined as a concept, which is a bigger problem.
That is, by using your argument we move the whole debate one level down, to the fundamental operations, and then you no longer have a way to simply explain addition or subtraction.