The author is mixing up physics (dimensional analysis) and maths, and trying to give the multiplicand a special role (I didn't even know there was a distinction between the two terms - to me they are both factors). This might be true in the physical world, but in the world of numbers, I think the distinction is irrelevant.
Furthermore, being able to compute/define multiplication through repeated addition doesn't prevent you from looking at the special properties of this new operator.
I have a PhD in physics and more maths qualifications than I can shake a stick at; to me, multiplication is repeated addition.
I’m not sure what the teacher is trying to do here, but I do think the outcome of what they’re trying to do is far more complicated than the simple “multiplication is repeated addition”.
I also happen to have an 8-year-old going through third grade right now, and when we were talking through his homework, it was quite clear that using simple concepts he already knew (addition & subtraction) to explain slightly more complex things that he was learning (multiplication and division) was really useful to him. As I recall it being to me.
[aside] I think the maths schedule is more advanced now than it was in my day anyway - he only started multiplication and division this year, but he's also doing algebra and simultaneous linear equations now, as in:
a + b + 8 = 24
a - b = 4
“Solve for a and b”
Pretty sure I only did that in senior school (11 and up), not at age 8. No powers as yet (presumably they’ll come after the multiplication/division stuff), so no quadratic formula, but still...
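For what it's worth, that little system falls straight out of elimination; a minimal sketch in Python (the function name and the reduction to a+b=16 are just illustrative):

```python
# Solve the homework system by elimination:
#   a + b + 8 = 24   =>  a + b = 16
#   a - b     = 4
# Adding the two reduced equations eliminates b: 2a = 20, so a = 10.

def solve_pair(total: int, difference: int) -> tuple[int, int]:
    """Solve a + b = total, a - b = difference by elimination."""
    a = (total + difference) // 2
    b = total - a
    return a, b

a, b = solve_pair(24 - 8, 4)
print(a, b)  # 10 6
```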
I think the useful distinction is that, when you teach multiplication as a mechanical computation (arithmetic) it's useful to talk about it as repeated addition.
As you get to negative numbers, rational/irrational numbers, complex numbers, matrices, etc. it becomes more useful to think about multiplication in more abstract ways, among which repeated addition is still often a useful way to look at it.
I also think it's not particularly useful to talk about those other ways to think about multiplication until you actually need to.
It's too easy once you've mastered the concepts to forget how beginners look at them and struggle to understand them. I think the author isn't remembering what it's like to try to understand multiplication as a new concept - I certainly can't remember.
I think the heart of the issue is whether it's more useful to teach children how to multiply two abstract numbers together as a kind of "mathematical procedure" that they need to memorize, or whether it's more useful to teach children that if they measure two sides of a square with a measuring tape, they can "multiply" the measurement and that the result is now in "square inches" rather than regular inches. And the schism is that some people believe that procedural memorization is useful because after 20+ years of education they've gotten through the good part, and other people believe that the procedural memorization does kids a disservice by divorcing mathematical thinking from the concrete world entirely.
In my very limited teaching experience, I think the answer differs based on the student. At the individual level I don't think there's much controversy. Just align with the student's learning style. At scale, I have no idea and do not have the data/experience to have a well-formed opinion.
I was perfectly happy to focus on mechanical mastery of the multiplication rituals well before I had any concrete reasons to use them. I know plenty of others didn't work that way.
Your last two paragraphs are, I believe, the crux of my own argument. Sure, matrices aren’t even commutative, and complex numbers have their own quirks because of i^2 == -1, but these concepts build on the earlier axioms the kid has learnt. Our entire education system is built on “lies-to-children”, and as you progress they point out that what you comfortably believed was a gross simplification. This is no different.
Speaking of “lies to children”, and since you are a physicist, it was only in my last year studying EE that Feynman’s QED came my way and just imagine my surprise on finding that photons do not travel in a straight line.
I'm in almost exactly the same situation: PhD, physics, forgotten more math than most folks ever learn, etc., and with an 8 year old learning the same level of mathematics as you describe.
I think the only difference might be I used "iterated" rather than "repeated" when helping him. Anyone who is just learning multiplication likely lacks the depth of experience necessary to make use of the "correct" jargon and abstract concepts as a starting point. "Repeated addition" is a useful aid in learning the operation to build that experience.
Another physics PhD chiming in here. I have never before noted the difference between "multiplier" and "multiplicand". The whole article has me rolling my eyes.
In fact, I would argue that multiplication being commutative shows that this distinction is meaningless.
Since you're a physician, do you think that helps for multiplicative relationships in real-world laws? It took me decades, sadly, to be comfy with handling U = RI formulas. On the algebraic level it's stupid simple, but for real-world physics the meaning is more a bidirectional coupling of ratios and amplitudes, and it taps into a different part of my brain.
Physicist, not physician, but really - I haven't used my physics knowledge directly in a few decades now... I've been a software engineer for most of my life :)
As for V=IR (I had to google U=RI, maybe U is the more modern version, but it was always V=IR when I were a lad), I don't really have a problem with ratios. When I was learning equations, the simple rule is "do unto one side whatever you do to the other", so ...
V = IR, divide by R -> V/R = I
I was happy with either representation, and I didn't think of it as multiplying, dividing, adding or subtracting, it's just "do the same thing" on each side. The problems I had were more "when do you apply Kirchhoff's laws to figure something out, and when do you apply Ohm's law?"; that sort of thing you just get by experience, I think.
What I meant about the ratio part is that physicists don't care much about the computation; they care that whenever you know one piece, you know the other will vary through a third factor in a multiplicative manner. Maybe it makes sense, but it's not at all a structural notion like (x) === iterate(+,n)
Agreed. Multiplier and multiplicand are different words but the commutative property says their values can swap equivalently, so . . . what was the author's point again?
If, for a moment, you conceptualize of multiplication on non-negative whole numbers as repeated addition, then this is the algorithm:
procedure product(multiplier, multiplicand)
    acc := 0
    for i = 1 to multiplier
        acc := acc + multiplicand
    end
    return acc
end
Swapping the arguments is a different computation. But after thinking about it, you realize that you get the same answer all the same. That's the point of
> (2 rows × 3 chairs/row) is not the same as (3 rows × 2 chairs/row), even though both sets contain 6 chairs.
The point with bringing up dimensional analysis is that the above algorithm doesn't work because what does it mean to do `for i = 1 to 3 chairs/row`? You might think of it like
procedure product(multiplier, multiplicand)
    acc := "0"  # an "absolute" zero that cooperates with any dimension
    for each single_multiplier in multiplier
        acc := acc + (multiplicand * single_multiplier)
    end
    return acc
end
But then what is `(multiplicand * single_multiplier)` ?
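One way to make the asymmetry concrete is a toy unit-carrying type (a hypothetical sketch, not a real units library): addition only works between like units, so the multiplier role has to be a bare count, and swapping the arguments isn't even well-typed.

```python
from dataclasses import dataclass

@dataclass
class Quantity:
    value: float
    unit: str  # e.g. "chairs/row"

    def __add__(self, other: "Quantity") -> "Quantity":
        # Addition is only defined between like units.
        if self.unit != other.unit:
            raise TypeError(f"cannot add {other.unit} to {self.unit}")
        return Quantity(self.value + other.value, self.unit)

def product(multiplier: int, multiplicand: Quantity) -> Quantity:
    # The multiplier must be a dimensionless count: we can loop
    # "2 times", but not "3 chairs/row times".
    acc = Quantity(0.0, multiplicand.unit)
    for _ in range(multiplier):
        acc = acc + multiplicand
    return acc

print(product(2, Quantity(3, "chairs/row")))
# product(Quantity(3, "chairs/row"), 2) fails: range() needs an integer.
```

Note that repeated addition keeps the multiplicand's unit (chairs/row), so turning "rows × chairs/row" into plain chairs needs genuine unit-aware multiplication, which is exactly the question the second procedure leaves open.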
Multiplication over the reals is commutative. Matrix multiplication of non-square matrices isn't. Multiplication in a Ring isn't necessarily commutative. Other algebraic structures also have non-commutative multiplication.
One could argue that these things aren't "multiplication" even if they are "products" since they don't satisfy all the properties of multiplication over the reals. But it is common to call the use of the product operation "multiplication", at least in cases where there's only one product operation to use. EG Geometric Algebra has Inner, Outer, and Geometric products, so calling them "multiplication" seems less common IME.
I really don't get the point you're making. If we're going to pull out random examples, monoids aren't guaranteed to be abelian; strings and concatenation form a monoid that's not abelian.
If you have enough mathematical sophistication to conceptualize a non-commutative ring, you're well past the point where naming conventions are even remotely an issue.
The original article was contrasting addition and multiplication on the basis that addends are called the same while factors are supposed to be called differently, which not only makes no sense (it's just a naming convention), but it also breaks down when you have more than two factors: what is the "c" in a x b x c called? Or we're talking about non-associative operations now?
My point is only that the commutative property is not inherent to all multiplication operations, so there can be a distinction between the operands. It's not necessarily a useful distinction, and in the usual use of multiplication it's utterly useless and only adds confusion.
But matrix multiplication is taught in high school (and usually promptly forgotten), it's not particularly advanced math.
Personally I'm of the opinion that the terminology is muddled. There's no need to distinguish the operands of a multiplication over any of the usual domains (reals, rationals, integers, etc). And when you reach the point where it does become important there's generally more than one product operation and we should stop calling it multiplication. "Matrix multiplication" is a bad term. You also typically wouldn't name the operands, since as you note there can be more than two!
Sure, my counterpoint was just that the same reasoning technically applies to addition as well, but as you note "matrix multiplication" is taught in high school, while my example wouldn't come up.
I agree not calling it multiplication would probably help; perhaps something like "linear transformation composition" might encourage students to keep it separated from real multiplication. I just found the argument in the original article kind of ridiculous, to be honest.
Yea, math tends to reuse terms and notation across disciplines in ways that add confusion rather than clarity. It's much better to think of infinity, for example, as multiple independent concepts than to assume it's all the same idea.
They exist to distinguish the element being operated on (the LHS) and what it is operated on by (the RHS).
Total technicality, but I could see myself using the term multiplicand/multiplier in my code if I had to implement e.g. a stack-based parser for arithmetic expressions.
I agree that it's a generally pointless distinction, although it might be useful in some cases, such as number systems where the multiplication isn't commutative, or for a particular implementation where the distinction matters.
After all, if you were to code a multiplication that was implemented naively as a series of additions, it'd generally be much faster to do 2x1000 than 1000x2.
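A quick sketch of that asymmetry, counting loop iterations rather than wall-clock time so the result is deterministic (the helper name is just for illustration):

```python
def naive_mul(multiplier: int, multiplicand: int) -> tuple[int, int]:
    """Multiply by repeated addition; also report how many additions ran."""
    acc, additions = 0, 0
    for _ in range(multiplier):
        acc += multiplicand
        additions += 1
    return acc, additions

print(naive_mul(2, 1000))   # (2000, 2)    -- two additions
print(naive_mul(1000, 2))   # (2000, 1000) -- a thousand additions
```

Same product, very different amounts of work, which is exactly why the operand roles can matter to an implementation even when they don't matter to the answer.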
To return to TFA I think the author is talking from a pedagogical standpoint, that teaching that multiplication is just a bunch of additions under a trench coat is not the best way to go. I'm not sure that I agree personally.
In particular this bit regarding multiplier/multiplicand makes zero sense to me:
>Different names indicate a difference in function. The multiplier and the multiplicand are not conceptually interchangeable. It is true that multiplication is commutative, but (2 rows × 3 chairs/row) is not the same as (3 rows × 2 chairs/row), even though both sets contain 6 chairs.
Of course 3 rows and 2 rows aren't the same, but what does it have to do with the order of the multiplication? Isn't 2 rows x 3 chairs the same thing as 3 chairs x 2 rows? It's a bizarre argument.
Suppose your friend Alice arranges your wedding. You ask her to arrange the lawn chairs in an arrangement of 2 rows and 3 columns. But she misinterprets your request as 3 rows and 2 columns. Oops. Now a particular family can't all sit in a single row without rearranging the chairs.
If all you care about is the total number of chairs, the order of operands is irrelevant. but if you care about the structure, "2 x 3" may encode information that "6" does not.
I address that point specifically in my comment. What you say is that the units of the operands matter, which I agree, but the order doesn't. 2rows x 3chairs and 3chairs x 2rows is the same thing.
The point isn't "matrix multiplication != scalar multiplication". The point is "the process of evaluating a reducible expression inherently discards information about the original expression", which is a fact about evaluation rather than any specific operator. The fact that "the information discarded is of little consequence to the compressed result" is a quirk specific to scalar multiplication. Thus, the commutative property distracts from explicitly modeling the "AST" so that the student understands what multiplication represents under the hood, beyond the rote memorization of scalar multiplication tables.
Perhaps an analogous situation would be: Suppose a teacher wanted to introduce the notion of limits to a calculus curriculum. "That makes zero sense. The only things a student needs to know are the shortcuts for each parent function, e.g. that (d/dx x^2) reduces to (2x) via handwavey magic." But what if an engineer needs to integrate over an arbitrary curve? Can students solve the problem without being comfortable with Riemann Sums? Maybe 1st-year calc students should rederive the shortcuts from scratch? "Except we're talking about a math course, not an engineering course."
> In particular this bit regarding multiplier/multiplicand makes zero sense to me.
> Isn't 2 rows x 3 chairs the same thing as 3 chairs x 2 rows? It's a bizarre argument.
It's bizarre to simias (and you, I assume) because y'all can't imagine performing the operation without thunking. (Don't get me wrong, I think "repeated addition" is the best method. I'm just attempting to explain the opposite perspective so that it feels less bizarre.)
You're correct. The GP is incorrect about their dimensional analysis. 5 apples 12 times yields apples, because it's apples times a dimensionless scalar (count). Newtons times meters is always Newton-meters, never Newtons or meters. Units are never magically dropped in dimensional analysis.
> (2 rows × 3 chairs/row) is not the same as (3 rows × 2 chairs/row), even though both sets contain 6 chairs.
The author has it all backwards. Basically, the fact that you can turn this arrangement by 90 degrees (turning the chairs too, if you do not model them as points) is the reason why multiplication is commutative. It is non-trivial that 5+5+5+...+5 (100 times) is the same as 100+100+100+100+100.
You first define multiplication of natural numbers to be repeated addition, then define multiplication of rationals in terms of multiplication and addition of naturals, then define multiplication of reals in terms of multiplication of Cauchy sequences of rationals ;)
You are making a mistake that many students unfamiliar with abstract algebra make, which is to confuse a particular "encoding" or "implementation" with the algebraic structure.
The concept of a real closed field [0] (and its categorical second-order version, the Dedekind-complete ordered field) stands on its own, without multiplication being defined in terms of repeated addition. It is completely independent of whether you happen to encode the reals as Cauchy sequences, Dedekind cuts, or something else. The (equivalence classes of) Cauchy sequences are not the same thing as the real numbers, even if we sometimes abuse terminology in this way for expediency. The distinction becomes increasingly important as you delve into more exotic algebraic structures.
Another illustrative example is the Hessenberg product [1]. Even the ordinary product cannot really be reduced to "repeated addition", because you have to use the infinitary concept of a limit. And not just your everyday limit [2], but a limit on a proper-class sized domain [3]!
I think it's easy to get too hung up on particular definitions. You're free to define the reals to be the isomorphism class of Dedekind-complete ordered fields if you like; I'm free to define the reals by some construction (Cauchy sequences, Dedekind cuts or whatever). As long as the chosen definitions are isomorphic it doesn't matter at all.
In general in a field I have addition and a multiplicative identity 1, as well as the distributive property, which means that for all elements x satisfying x = 1 + 1 + ... + 1 for some number of ones, I can write a·x = a·(1 + 1 + ... + 1) = a + a + ... + a, so there is always a subset for which multiplication works like repeated addition.
There is not one Dedekind-complete ordered field; rather, for every two Dedekind-complete ordered fields there is a unique isomorphism between them. For example, the n×n diagonal matrices with real entries, with all the entries along the diagonal equal, form a Dedekind-complete ordered field.
I don't know why you are conflating the words "compute" and "define"; do you understand how these are different words? I was responding to how to "compute \pi*\pi using repeated addition", which is rather different to defining \pi*\pi as a repeated addition.
> There is not one Dedekind-complete ordered field
Yes there is.
> For example the nxn diagonal matrices with real entries
That's a model, not the theory.
> I don't know why you are conflating the words "compute" and "define"
This whole discussion is about definition.
Your own comment started with "You first define..."
And no, you are not able to compute arbitrary products of arbitrary reals in this way anyway, if by computation you mean a finitary algorithmic process.
Ok, I think your definition of "field" is different to mine. Mine starts "A field is a set F with binary operations + and x" and then goes on to list some properties they have to have, yours seems to be doing something different. If you start with mine you get a whole bunch of different complete ordered fields, but you can easily show they're all isomorphic (e.g. Spivak does this IIRC).
>This whole discussion is about definition
No, it isn't; this whole discussion is answering the question about computing pi * pi. Maybe I was slightly sloppy in my first answer to that question; I was only attempting to sketch the method.
I don't really want to compute products of arbitrary reals in that way (or any way), but the method I sketched works for computable reals which is sufficient to cover the case of \pi. By computation I mean something that runs on a Turing machine, but I don't assume finitary (I'm happy for my Turing machine to keep producing more and more bits of precision forever).
Exactly. I've scrolled through hundreds of comments here now and it really is beyond me how the question of whether you can define x·y in terms of addition for x and y being arbitrary reals is even a matter of debate.
The question is not whether you can define it in terms of addition in some abstract way, but whether you can define it in terms of repeated addition, i.e. something of the form x + x + … + x.
> The question is not whether you can define it in terms of addition in some abstract way
This is splitting hairs. The reals themselves are defined "in some abstract way". For instance, what does addition of reals even mean? How do you add two arbitrary real numbers? Exactly, by adding the elements of the Cauchy sequences. (And similarly for multiplication.) This is the definition of addition (multiplication) and the only abstraction involved here is the abstraction due to the way the reals are constructed in the first place.
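Spelled out termwise, with x = [(a_n)] and y = [(b_n)] as (equivalence classes of) Cauchy sequences of rationals, the standard definitions are:

x + y := [(a_n + b_n)] and x * y := [(a_n * b_n)]

The only thing left to check is that these are well defined, i.e. independent of which representative sequences you pick for x and y.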
You need the concept of a ratio, so arguably I'm using multiplication to define multiplication, but you're sort of cheating by asking about a fractional number.
Pi is definitely not a fractional number... I don't think it's cheating at all, multiplication on the naturals is repeated addition, that's not the case for the reals.
Sure it is, as long as you're willing to repeat in increments of real numbers.
And that's my point, basically - By the time we're discussing real numbers, we need multiplication as an operator, because we're discussing ratios already.
I wouldn't call the fact that pi is irrational an insight of "early mathematics." We knew that sqrt(2) was irrational around 500 BC if not earlier. We didn't know pi was irrational until around the time of the American revolution.
Interesting how computers (which only understand '1' and '0') can do multiplication of reals, then. Unless you're Intel, of course... (no, I will never let it go :)
Can we not consider that the algorithm taught for multiplication of real numbers is repeated multiplication of natural numbers, which can be seen as repeated addition of natural numbers, so that we could define an addition-only algorithm for multiplication of real numbers?
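Roughly, yes, for reals approximated by rationals. A sketch (Python's Fraction stands in for "pairs of naturals"; only non-negative values are handled, and the helper names are just illustrative):

```python
from fractions import Fraction

def nat_mul(a: int, b: int) -> int:
    # Natural-number multiplication as repeated addition.
    acc = 0
    for _ in range(a):
        acc += b
    return acc

def rat_mul(x: Fraction, y: Fraction) -> Fraction:
    # Rational multiplication in terms of natural-number multiplication:
    # (p/q) * (r/s) = (p*r) / (q*s)
    return Fraction(nat_mul(x.numerator, y.numerator),
                    nat_mul(x.denominator, y.denominator))

# Approximate pi*pi by multiplying a rational approximation of pi by itself.
pi_approx = Fraction(355, 113)  # the classic approximation
print(float(rat_mul(pi_approx, pi_approx)))  # ~9.8696 (pi*pi ~ 9.8696044)
```

Feeding in ever-better rational approximations of pi gives ever-better approximations of pi*pi, which is the "keep producing more bits forever" picture for computable reals; it's a computation scheme built on addition, though, not a definition of real multiplication.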