Wolfram Alpha Is Making It Extremely Easy for Students to Cheat (wired.com)
117 points by spacelizard on July 7, 2017 | hide | past | favorite | 154 comments


When I was 18, back in 1999, I had an internship at WRI (Wolfram Research) in Illinois. I'd applied armed only with a library copy of the Mathematica book, so they sent me a CD with Mathematica on it. I made some demo things and got a slot, and flew to Champaign.

I worked on polyhedra for a summer, writing code that could unroll a polyhedral model to its 2D net. Find the volume, the number of faces and all kinds of stuff. I met a bunch of interesting people and it was a blast.

I also fell asleep at my keyboard more than once. It was a beautiful summer, biking to work and working with what I still think is one of, if not the, best language ever.

Here's why all this is relevant - I came back to real life to study CompSci on these old Sparc machines. And it was like, here's the power button. What's an object in Java? What's a compiler? All reasonable stuff.

But: Wolfram Research and Mathematica had, in a sense, ruined my undergraduate life before it started. Why were we using all these bizarre tools? Can't we do this a million times faster? Why are we learning all these bizarre integrals?

It was similar to being denied graphing calculators in A-Level Mathematics (in the UK, think high school). I get it - we need to learn 'the basics' and survive without tools to some degree. But, it would have been nice to use them in some contexts and not just deny their existence.

There's an anecdote, I think about Milton Friedman, being shown people building a dam with shovels rather than digging machines, to keep people employed in some God-forsaken country. He asked: why don't you use spoons instead? Then even more people would be employed.

Mathematica and Alpha are wonderful tools, and I highly recommend applying for an internship if you're of the right age or whatever the requirements are today.


> we need to learn 'the basics' and survive without tools to some degree

This is the wrong approach. There's no way you wouldn't understand math much better with these tools.

I'm hoping that in the future, math will be less about equations and symbols and more about graphing and being able to move around in the spaces described by the equations.

I would draw an analogy with a compiler. After using it for some time, your brain will take on the shape of the compiler and you'll write correct (lol is it ever tho) code without even having to compile it.


Both are true.

Clearly we are more productive with the tools. However it is very, very easy for people to see the tools as magic. At some point we need to actually understand what it is that we are doing. For which those equations and symbols are essential.

Yes, the computer can draw a pretty picture. Pretty pictures are helpful in conveying information. But they are a horrible way to understand inherently complex topics.

For example, pictures are essential for conveying basic concepts in multi-variable calculus. But you won't make much sense of the topic until you actually understand the three basic mathematical representations of a surface embedded in a higher dimensional space (function, level surface, and parametrized coordinates), and how each connects to the tangent to the surface at a point (whether that tangent is a line, plane, or something higher dimensional). And you need to understand this in an n-dimensional way because that comes up, a lot.
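Concretely, for a surface in three dimensions the three representations and their tangent-plane descriptions look like:

```latex
\begin{align*}
\text{graph of a function:} \quad & z = f(x, y),
  && \text{tangent plane: } z = f(a,b) + f_x(a,b)(x - a) + f_y(a,b)(y - b) \\
\text{level surface:} \quad & F(x, y, z) = c,
  && \text{tangent plane: normal to } \nabla F \text{ at the point} \\
\text{parametrization:} \quad & \mathbf{r}(u, v),
  && \text{tangent plane: spanned by } \mathbf{r}_u \text{ and } \mathbf{r}_v
\end{align*}
```

Each description generalizes directly to hypersurfaces and higher codimension, which is the n-dimensional version the parent comment means.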

So no, we won't lose equations and symbols. Ever. They are essential, and there is no possibility of real understanding without them.


> we won't lose equations and symbols. Ever. They are essential, and there is no possibility of real understanding without them.

i don't know.

sometimes i think these symbols and equations are just machinery created by deeply talented, deeply insightful people to help the rest of us arrive at correct statements and give us a little glimpse of a vast panorama of truth which their minds see intuitively, unaided by all the symbolic clutter.


Yes, and no.

Good mathematicians and scientists that I have personally known have a wide variety of styles of understanding and thinking. The first and most basic divide in mathematics is between people whose natural inclination is algebra versus analysis. The best at analysis seem to have a deep intuition like what you project. The best at algebra operate pretty directly with symbol manipulation.


> So no, we won't lose equations and symbols. Ever. They are essential, and there is no possibility of real understanding without them.

I didn't say get rid of them. But I think that if you see a complex equation, you should be able to roughly imagine it in your head. Working only with symbols won't get you there.


> But I think that if you see a complex equation, you should be able to roughly imagine it in your head. Working only with symbols won't get you there.

You're wrong. To take a real example, visualize G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}. I dare you. Those are Einstein's field equations for General Relativity. I don't visualize a system of 2nd order partial differential equations involving many dimensions at each point in a 4-dimensional coordinate system. Do you?

Furthermore people differ on how visual they are. If you're someone who has to visualize things to understand them, you're hardly alone. Lots of people are like that. But then again there are people like me whose thinking is almost entirely non-visual. If I'm dealing with something abstract, not only do I not think visually, but a purely visual explanation doesn't really help me much.


Hmmm. I would disagree with you. It is impossible to visualize the equations as that isn't possible to do in 3 dimensions, but it is possible to develop an intuition. After taking functional analysis, for instance, I began to develop an intuition for how function spaces work, and was able to visualize that to a certain extent (for a given definition of "visualize").


You get it. Few people understand what's happening in every dimension at every point, but getting a feel is doable.


The whole argument is that it's not doable without the right tools, and that visualizations are a very good tool for this.


You might be thinking visually without even recognizing it, because calling that kind of imagination visual (as in optical) is misleading. A hydrogen molecule (H2) already has 6 degrees of freedom; give it color and you have 8.

There was a story on HN about activity in the "visual" system of blind people. Here is a similar story Google spat out: https://news.ycombinator.com/item?id=14720225

Commutative diagrams are a development that has applications in algebra, too. I'm currently reading Physics, Topology, Logic and Computation: A Rosetta Stone (J. C. Baez, M. Stay) https://news.ycombinator.com/item?id=12317525


I mean, this particular equation? Not really, no. But that might be because I haven't done much general relativity. I can kinda visualize other somewhat complicated equations, though.

Sure but then you are giving an advantage to the more visual people no? Can't I just flip your argument?

Yes, symbols are useful, but you should have an understanding of what they are actually doing.


> Sure but then you are giving an advantage to the more visual people no? Can't I just flip your argument?

Sure, you can flip the argument. You'll be wrong, but you just did it.

People naturally lean towards understanding using different methods. Each mode of thinking has strengths and weaknesses, and so do people. Some things are better understood visually. Other things not. Some things have the best way of understanding varying by person.

So yes, on some things you'll have an advantage over me because you're visual. On some things I'll have an advantage over you because I probably understand complex abstract relations more naturally than you do. And your insistence that everyone is best off trying to think like you do is completely misguided.

Anyways I've explained my position enough. If you don't wish to get it, you won't. I'll let other people take over the task.


> "I think that if you see a complex equation, you should be able to roughly imagine it in your head."

This breaks down in many cases. What if your equation involves complex numbers for each component? What if it's an eight-dimensional space, or perhaps a multi-hundred-dimensional space for a recommender system?

At some point, you have to be comfortable with more abstract representations.


I think of complex numbers as just points/vectors.

As for multidimensional spaces, you might not be able to visualize all the dimensions at once, but by taking them three at a time you can go pretty far.

You should check that the abstract representations still make sense intuitively.


> Clearly we are more productive with the tools. However it is very, very easy for people to see the tools as magic. At some point we need to actually understand what it is that we are doing

And that's why computer science needs to be part of the math curriculum in schools asap.


Please, no.

Or at least not unless it is done well.

The problem is that we start with a reasonable idea like, "People should understand how the tools work." We make an obvious observation like, "Understanding computer science helps people understand how the tools work." We come to a conclusion like, "Computer science needs to be part of the math curriculum in schools asap." This turns into a mandate for educators who themselves have no understanding of computer science, who then ask the question, "What does everyone need to know about computers?", and who then consult with what seem to them like appropriate experts. Soon you're hearing about how we'll have interactive computer science courses to guarantee that children have familiarity with how computers work. And to fanfare these are rolled out in schools.

Then you go and look at what is being done. A teacher who clearly doesn't actually understand how computers work has kids using a variety of interactive programs, ranging from Microsoft Word to animated presentations. They are calling that "computer science". No actual understanding is imparted. And the exercise reduces time available in the core curriculum that could have been spent on things like quantitative reasoning. You know, stuff which ACTUALLY can lay the foundation for understanding how the tools work.


> A strawman who clearly doesn't actually understand how computers work

Fixed that for you. I had good informatics teachers 10 years ago.

> quantitative reasoning

As in complexity of sorting functions? We did that, non-rigorously.

I just don't see how, say, meiosis and mitosis or volcanism are any more mandatory to learn than e.g. Codd's normal forms or the workings of an ALU.


I described what I actually witnessed happening to my children in supposedly good California schools. It fits a pattern that has been frequently seen over a long time with a variety of technical subjects that were pushed to schools, starting with the New Math debacle back in the 50s and 60s.

I'm glad that you personally had a better experience. I believe that I described something closer to what we should expect to happen with such initiatives.


I would claim that most folks who have a reasonable understanding of fundamental computer architecture and assembly are those who actively learned it. Folks who have just used a compiler for everything, in my experience, rarely actually develop the understanding innately. Compilers still have this "magic" element associated with them.


It's not either or, it's a dance. You make a hypothesis in your head, verify it with a compiler. If result matches your hypothesis, move on. If not, investigate.

For me personally, I thought I kinda knew assembly until I started using https://godbolt.org, when I realized that I really didn't know assembly. This site helped a lot of other people as well. Note that all it does is make the process of discovery faster. But it's a tool in the same category as a calculator.
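That hypothesize-then-verify loop scales down, too. A minimal Python sketch of the same habit, using the standard library's dis module as a stand-in for a compiler explorer (my analogy, not the commenter's actual setup):

```python
import dis

def add_one(x):
    return x + 1

# Hypothesis: CPython compiles the body to roughly
# "load x, load 1, add, return". Verify by disassembling:
dis.dis(add_one)

# Exact opcode names vary by Python version, but a return
# opcode is always present at the end of the body.
opnames = [ins.opname for ins in dis.get_instructions(add_one)]
print(opnames)
```

If the listing matches your hypothesis, move on; if not, investigate — the same dance the parent describes with a real compiler.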


Many vocations use computers, but should they all require learning about software, electronics, etc.?

If you don't need to "understand" the math, why risk the opportunity cost of learning it?

> I would draw an analogy with a compiler

Staying "high level" is a good thing for some programmers. If every web dev had to dig into the low-level workings of the browser, a lot less would get done.


I also interned at Wolfram in 2000. I spent the summer writing coverage tests, which meant getting to run Mathematica through gcov and seeing what lines weren't being hit by the test suite. Writing coverage tests for most things would have been incredibly boring, but this was anything but. There were lines of code that required me to spend two days in the Wolfram library reading up on branches of math I had never heard of before (wtf is a Gröbner basis?!). My favorite discovery was a line of code in the integer factoring function that I could only hit if I constructed a number by multiplying several Carmichael numbers IN A CERTAIN ORDER. If you like math and computer science, it's hard to imagine a better place to intern.
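For the curious: a Carmichael number is a composite n with pow(a, n-1, n) == 1 for every base a coprime to n, which is exactly the property that fools Fermat-style primality checks inside factoring code. A quick Python sketch using the smallest one, 561 (my illustration, not the actual numbers from that test suite):

```python
import math

def fools_fermat_for_all_bases(n):
    """True if a^(n-1) == 1 (mod n) for every a in [2, n) coprime to n."""
    return all(pow(a, n - 1, n) == 1
               for a in range(2, n) if math.gcd(a, n) == 1)

# 561 = 3 * 11 * 17 is composite, yet passes for every coprime base
print(561 == 3 * 11 * 17)               # True
print(fools_fermat_for_all_bases(561))  # True

# An ordinary composite like 15 fails for some base (e.g. 2^14 = 4 mod 15)
print(fools_fermat_for_all_bases(15))   # False
```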

Sorry to hear about the rest of your undergrad experience. I was really fortunate that I never had a professor complain about me using Mathematica for everything. Even just typesetting math in Mathematica instead of LaTeX was a huge benefit for me.

One of my favorite classes was Computational Algebra taught by Dana Scott. He did the entire class in Mathematica. Each lecture was just him walking through a notebook and the problem sets were all about writing Mathematica code to solve interesting problems. I think I still have them somewhere...


> working with what I still think is one of, if not the, best language ever.

Really? Mathematica is amazing and uncontested for symbolic algebra, but writing anything more than a notebook/paper is a nightmare. Built-in stuff is good, but functions you have to define yourself instantly become an incomprehensible mess of parentheses. That's leaving aside the fact that it's a proprietary language, and the many flaws of its eponymous founder.

I long for the day SymPy or similar gets good enough that we can dispense with it.


I would agree the language can be confusing, but it isn't exactly proprietary. There are open-source implementations of it like mathics.net


To add another point, my A-Level teacher always said anything the calculator can do, you can do, and just as fast.

And then she chose a question from the book, had one of us start typing, and she started at the board solving the same thing.

She finished first. Not by much, and obviously the calculator is the faster choice more often, but she finished first.


Reminds me of the story of Feynman vs the Abacus. [1]

Though, as a story, the conclusion he draws is pretty self-congratulatory and bothers me a bit. The substrate on which you implement an algorithm like arithmetic doesn't really speak to whether you "know numbers." It's like the high schooler thinking being very good at computing integrals makes you good at math.

[1] http://www.ee.ryerson.ca/~elf/abacus/feynman.html


Being powerful, i.e. good at something, is a function of work over time; so if you are good without much effort, that implies some sort of talent, I think.


Well, she certainly proved that there's at least one problem whose solution she can sometimes produce faster unaided than at least one high school student with a calculator.


> She finished first. Not by much, and obviously the calculator is the faster choice more often, but she finished first.

This is more of an interface problem I believe.


It's a low level magic trick: she chose the problem. Even supposing she didn't have the answer memorized, she knew exactly what to min/max to favor herself in the challenge.

(A real magician would also work to make sure that the kids thought they chose the problem, but the choice had already been primed for them.)


That says more about the book than about calculators. :)


That's going to depend heavily on the problem in question. Multiplying large numbers? Probably. Computing the fifth root of pi to eight decimal places? Probably not so much. (Maybe there's some quick way to do that second one mentally, substitute something different if so.)
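The contrast is one line of Python (my example, not the parent's):

```python
import math

# Trivial for a machine, unpleasant to do mentally:
fifth_root_of_pi = math.pi ** (1 / 5)
print(f"{fifth_root_of_pi:.8f}")

# Sanity check: raising it back to the 5th power recovers pi
# to within floating-point error.
print(abs(fifth_root_of_pi ** 5 - math.pi) < 1e-12)  # True
```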


sounds like she proved that she could do it just as fast as a calculator. good thing i wasn't in that class. i would have proved her wrong: i couldn't out compute the calculator.

then again, they wouldn't have let me in that class in the first place.


> my A-Level teacher always said anything the calculator can do, you can do, and just as fast.

This is obviously false, and I don't really understand the point of saying this.

Plus, the benefit of using a computer is that computation is effortless, letting you use more of your brainpower on actually interesting problems rather than something that is easily automated.


> But, it would have been nice to use them in some contexts and not just deny their existence.

My high school classes in Victoria, Australia, had tests and exams both with and without TI CAS (graphic calculator that also does algebra and calculus with a pretty screen) and I agree that it's pretty nice. Interestingly, I think our curricula were based on the UK's originally.

I think it's similar to learning all the simpler math. You're taught to add large numbers on paper but in the end doing it with a calculator saves you a lot of time and effort. Learning to use a CAS or Mathematica or the like seems essential if you're going to be working in a field that uses calculus for practical things like engineering, medicine or finance.


I use Alpha quite frequently. I have these big formulae that need differentiating/integrating (non-linear solvers, Jacobian matrices, etc.) for a paper I'm looking to publish. Now, the algebra kind of gets seriously tedious and error-prone. Here Alpha comes to the rescue. I don't have the budget for a complete Mathematica package. So, in the end I take the results and use scipy/numpy to write the code to use them in numerical computation.

So, to that end I agree here. They are absolutely wonderful tools if you know what you're applying them for. However, they can be pretty bad if you're trying to use them to cheat (just like everything else).


> ... why don't you use spoons instead?

reminds me of one of the arguments against minimum wage hikes: if raising the minimum wage makes workers better off and has absolutely no drawbacks for anyone, why raise it to a mere $15/hour? why don't we do some real good and crank it all the way up to $100/hour?


And why not eliminate it, if it's bad? That way we can pay people to dig with spoons instead of expensive backhoes.

Seriously, though, to answer the question: there's a point where a raise in the minimum wage will stop being helpful, and everyone who supports the minimum wage understands this.


> ... why not eliminate it, if it's bad?

Did I say that the minimum wage was bad?

I view the minimum wage as something with both advantages and disadvantages. Depending on its level, different workers and businesses are helped or hurt.

If $15 is helpful, then $20 must be more helpful, except to the workers whose hours are cut back, etc. At each point along the scale some workers are helped but others are hurt.

If everyone who supports the minimum wage thinks that, beyond a certain level, it is no longer "helpful", what is the official, universally accepted definition of "helpful"?

How many workers can lose their jobs vs how many experience an improved standard of living?


You may cheat all you want on your homework, but if calculus tests are like they were when I was in college (taken with no notes or calculator), having only cut and pasted from Wolfram Alpha all semester is going to lead to an F in the course.


This - I think part of the issue is to stop homework from becoming busywork for grades. If everyone understands that homework is supposed to nail down and self-test understanding of material taught in class, then there's no reason students should "cheat". To this end, I like the approach of mentioning repeatedly that homework is for the students' own good, giving a test early on so students quickly understand they'll do badly without practice, and then announcing after the test that you'll drop the scores, but just DO YOUR DAMN HOMEWORK! :-)


Don't announce it, or it will work only for one or two years.


Agreed. This headline makes it sound like they're cheating in exams, but these are take-home assignments we're talking about. If Wolfram Alpha weren't available, they'd be asking friends or posting in maths forums. Wolfram Alpha is simply more efficient at the job, but all take-home assignments have this problem inherently.

Students just need to be aware that they should learn from the solutions Wolfram Alpha provides (there's nothing wrong with that); otherwise they're going to flunk the closed-book exams anyway.


Exactly: Why is homework considered something that can be "cheated" on anyway? The whole purpose is to get practice and learn the material.

Just make the tests challenging - the ones who actually did their homework will pass. The ones who didn't won't.

Wolfram Alpha helped me many times - sometimes you're simply stuck on a problem and no one is there to help. But like anything, you can use it to cheat and not do work, or you can use it to help you learn more.


I'm well aware of this. I'm going from the Ontario high school curriculum, where homework is 70% of the grade, to the University of Waterloo math faculty, where assignments are 10% and the midterm and final combine for the other 90%. Wolfram Alpha should be used for nothing other than a sanity check afterwards.


Too right; being able to just look up the answers may help, but homework made up a small percentage compared to the actual exams, which of course were done without any notes and were simple enough not to need a calculator.

It may be difficult to teach these kids how to perform the math itself when they can just look up the homework, but ultimately they are failing themselves when it comes down to needing a fundamental understanding of the process itself.

This reminds me more of teachers scoffing at calculators being a crutch. It comes down to the students' willingness to learn, not how to thwart cheating on homework.


> it may help being able to just provide the answers, but homework made up a small percentage compared to the actual exams...

One of the easiest ways to inflate grades without explicitly lowering standards is to decrease the proportion of the final grade that depends on the midterm and final. Cut out the midterm for "more instruction time" and leave a 20% final. Homework is now worth an absurd 80% of the grade... that's no good. Throw in a project, participation, or auto-graded "e-labs" with infinite attempts. Suddenly it's really difficult to get anything less than a B unless you really just don't give a crap. But also kinda difficult to actually learn because So. Much. Busy. Work. And, on top of everything else, we're punishing the one student who's not cheating.

Any time I see a lower division syllabus where closed-book exams are worth less than 40% of the grade, I'm instantly suspicious.

> but ultimately they are failing themselves when it comes down to needing a fundamental understanding on the process itself.

Yup.

> It comes down to the students' willingness to learn, not how to thwart cheating on homework.

In fact, you can leverage the calculator or WA to teach the material at greater depth. No longer need lots of practice with trig identities or u substitution? Great. Maybe we can write a few proofs instead, or use the time to work through a large case study.


My impression is that the article is about high school students, where homework tends to be on the order of 75% of your grade simply because you get so much of it.

If someone were really serious about abusing the service they could pull out a C in the class even with hard failures of the tests.


> where homework tends to be on the order of 75% of your grade

I think I found the problem.


It's hard to balance when you have an hour of homework assigned every day, but only get tested about once a month. So you have 20 hours of homework vs. a 30 minute test.

The difference in magnitude ends up being too much. Either each homework assignment is worth almost nothing or they outweigh tests.
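A back-of-the-envelope sketch of that imbalance in Python, with assumed counts (20 homework assignments and 1 test per grading period - my numbers, not the parent's):

```python
# Assumed: 20 daily homework assignments and 1 test per grading period.
n_hw, n_tests = 20, 1

# Scheme A: weight every assessment equally by item count.
hw_share = n_hw / (n_hw + n_tests)
print(f"per-item weighting: homework is {hw_share:.0%} of the grade")  # 95%

# Scheme B: fixed category weights, say homework 20% / tests 80%.
# Each individual homework is then nearly worthless:
per_hw_share = 0.20 / n_hw
print(f"category weighting: each homework is {per_hw_share:.0%} of the grade")  # 1%
```

Either homework dominates the grade or each assignment is worth ~1%, which is the dilemma the parent describes.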


I was a math major in college only a few years ago, and lower-level courses tended to concentrate most of the grade in exams for this reason. The exams were always pencil-and-paper only, with no resources of any kind allowed. In upper-level classes, the trend often shifted, with most or even all of the grade coming from problem sets. Some classes didn't even have exams. But this is because the material had become entirely proof-based. Referencing books was not only allowed but encouraged.


But it might seem like good time management to have Wolfram Alpha do your homework sometimes. That means it's going to take longer for a teacher to notice that you're not getting a concept well, and that means a lot more painful surprise-F's on tests.


That could be painful for the teacher too. If kids don't do well on tests, then teachers can fail their evaluations. It seems like it would be an incentive to use the flipped model, where exercises are worked through in class and students learn lessons at night via video and reading.


> That means it's going to take longer for a teacher to notice that you're not getting a concept well, and that means a lot more painful surprise-F's on tests.

I'm not sure how much to read into this, so forgive me if I'm misunderstanding the tone; but calling an F after not doing any of the work oneself a "surprise F" seems a lot like calling it a "surprise loss" when one hasn't shown up for any of the team practices. (If you were just observing that students are surprised by such F's, no matter how often it's pointed out to them, and no matter how often they've met such outcomes before, then I am forced reluctantly to agree.)


I learned this pretty quickly - I used Mathematica a bit too much for help on calculus homework and wound up paying dearly during the first few exams.


Grumble grumble back in my day grumble grumble


Grumble grumble integration by exhaustion grumble grumble Napier's bones grumble grumble


Can confirm.

Source: did this, got a C in freshman calc.


It is not cheating. It is leveling the playing field.

Some students will inevitably have access to tutors or parents who can give them the full answer. With wolfram|Alpha all students have access to this.

What this really shows is that homework isn't the best tool for gauging student prowess.


> It is not cheating. It is leveling the playing field.

This is not even remotely true. I think you seriously under-estimate the value of parents/tutors.

Students with tutors/parents will not just get the right answer. They will also be led to the right answer, over and over, and told what they need to practice in order to improve. Explanations will be tailored to their learning style. So they get direct help on assignments, but they also get regular indications about how to improve their performance in scenarios where they don't have that help.

Wolfram Alpha just gives the answer.

If homework is 90% of the grade, maybe it levels the playing field for letter grades in the course... until a few years down the road, when the student who didn't actually learn the material is screwed and has no way of catching up.

If homework is a more reasonable 20-50% of the grade, then the reckoning comes at midterm/final time.

> What this really shows is that homework isn't the best tool for gauging student prowess

Well, that's certainly true.


While W|A doesn't come close to making the field level, I think there's an argument to be made that it makes the field more level than it was. The perfect being the enemy of the good...


That's true. Especially the answers that tell you the process for deriving the solution instead of just giving the solution. Then you can see where you got stuck/went astray.


Homework is largely supposed to be about reinforcing the lesson for math courses. For other courses it's largely about reading, which shouldn't be done in class. Tests/essays are for assessment.


Other countries don't put as much emphasis on hw and seem to do just fine. In some of them hw isn't even graded; it's for you to practice. My high school, for example, gave out a book with the problem solutions. For some of them it was only the final result; for others it was the whole process. Hw didn't affect your grade at all, and it was up to you to decide whether to do it.

The amount of hw students get in the US is astounding. It's not always productive either (more often than not, it's not). And it seems like there's no pushback against it.


Then why should homework be factored into grades at all?


1. Homework should absolutely be graded to provide students with feedback, assess learning progress, and tailor instruction to strengths/weaknesses of students in the current iteration of the course. Once all the work goes into grading, it's hard for teachers to throw out that data when assigning grades.

2. It mimics the real world where sitting your butt in a chair and turning in useless reports is valued -- perhaps not as much as actual outcomes, but still valued. Perhaps that's stupid, but in most jobs it's a reality. IMO this isn't a great motivator, but it's one a lot of teachers cite.

3. Helps pad the grades of students who are terrible test takers / have anxiety issues.

4. Students are... students. They don't always know what's best for themselves. Making homework worth a portion of the grade communicates to students that they need to practice. Patronizing? Maybe. But also a reality.

5. It's the easiest way to inflate grades without explicitly lowering standards.


Incentive to do it.


And propping up the grades of compliant students.


It shouldn't be, and isn't in many countries.


I remember letting wolfram|Alpha solve some problems, then reading the steps it took to get there. They weren't always the most "human" approaches, but there was something to learn from it as well.

Poor-man's (student) tutor.


If you know how to formulate the problem well (which is typically half the problem anyway), Mathematica is great for providing a closed-form solution, if one exists. In CS graduate school, it would provide general closed-form solutions that I'd never have thought of. Most everyone else brute-forced a solution numerically or simplified the problem to specific cases, which is fine, but you get so much more insight from the general analytical solution, if one exists.
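The closed-form-versus-brute-force contrast is easy to demonstrate with SymPy (my illustration; any CAS, Mathematica included, does the same):

```python
import sympy as sp

n, k = sp.symbols('n k', integer=True, positive=True)

# General closed form: one formula valid for every n,
# showing cubic growth at a glance
closed = sp.factor(sp.summation(k**2, (k, 1, n)))
print(closed)

# Brute-force numeric check of one specific case
assert closed.subs(n, 100) == sum(i**2 for i in range(1, 101))
```

The symbolic result, n(n+1)(2n+1)/6, tells you how the sum behaves for all n; the numeric loop only answers one instance.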


Tutors can explain in very clear terms if you don't understand the problem, or provide an analogy if the explanation isn't understood. I know of at least 3 schools in my state that provide dedicated math sessions in the libraries with student tutors.

A machine usually doesn't help if you don't understand the concepts. If you're stuck on a problem and one step along the way was off, I can see how Wolfram Alpha can help.

I would bet the vast majority of students are using Wolfram Alpha to complete homework.


In my two semesters of college math, I've gathered that the faculty has something of a phobia about students' access to help outside of the department, if not Wolfram in particular.

Homework problems were oftentimes deliberately difficult, and attending tutoring/office hours was almost certainly necessary for most students to master the material.

I got my hands on an instructor's manual of the textbook, and it was a tremendous boon for my mastery of the topics. By having immediate access to the solutions of difficult problems, I was able to comprehend how to approach problems of that type, and therefore could solve more difficult but similar examples in the future. The cycle of attempt/fail/check-solution/repeat was really effective. Waiting for the instructor's office hours or the availability of tutors would have made this process, if not impossible, incredibly inefficient.

Do any math educators have insight into this? Is this math department clinging to an antiquated curriculum in which faculty is something of a gate-keeper to knowledge? Is there a good reason for their distaste for 'going around' them?


> the faculty has something of a phobia of ... students' access to help outside of the department.

Math professor here. I am most certainly happy if my students get help outside the department, and I think my attitude is quite typical.

We can be a little bit wary of some kinds of help. Too much math teaching consists of "If you see a problem that looks exactly like X, here are the steps you should memorize to solve it."

But we don't care per se if you can solve problems of the shape X, Y, or Z. We want you to develop your skills to the point that all of these lie naturally within your skill set, that you could do them even if you've never seen one exactly like that before. As such, some kinds of tutoring can be counterproductive.

But most aren't. In my opinion your professors' attitude was quite foolish. Kudos to you for seizing the initiative and figuring out for yourself how to best learn the material.


> Too much math teaching consists of "If you see a problem that looks exactly like X, here are the steps you should memorize to solve it."

A significant amount of math testing is basically checking whether you've memorized some theorem (and can then apply it), so is that surprising?


> is that surprising?

No. It's a difficult problem to mitigate.

There are always going to be some students who want to learn the minimum possible to pass the exam, and who will never work with the material again. Although I do my best to be respectful of such students (indeed, in some circumstances this can be a perfectly rational point of view), my pedagogy is aimed at the student who sees my class as something more than a meaningless hoop.


I'm with my sibling commenter impendia (https://news.ycombinator.com/item?id=14719497), and am also a math professor.

My way of saying it is that it's great if you get help from any source you can, but it's way too easy to get something that seems helpful (because it makes short-term goals easier to achieve) while being damaging in the long run. It's fantastic if you had the personal discipline to use a solution manual to deepen your understanding, but there are lots of students who will use the solution manual as a copybook—the material in it going, as the saying goes, from page to pen without passing through brain on the way. Since I, as a teacher, don't have a ready way at the beginning of the semester to distinguish the students with your discipline from those without, I'm just going to discourage everyone from using solution manuals—but, as long as your homework solutions aren't copied from it, I don't care much if you go against that advice.


I fail to see the problem here. In a curriculum that takes the joy out of math and attempts to reduce the student's mind to a calculation machine, students employed a calculation machine.

I understand the argument that math is important, but the way it's taught in America, even at the AP level, is criminal. It boils down to "if you see this pattern, apply these steps" without any effort at going beyond. We teach "how" but not "why", which I think is a common refrain when talking about the American education system, or any test-driven education system. Math is a means, not an end.


This drives me crazy in my mathematics courses. Sometimes I just do not recognize the pattern, and tools like Mathway or Wolfram Alpha can say "right here dummy" and give me the first step. It is not always about cheating, but seeing the steps for solving the problem logically outlined. I often back myself into a corner working out a difficult problem and need a lifeline.


Playing the world's saddest song on the world's smallest violin there, Denise Garcia. How about updating your curriculum for the third millennium? Nah, that would be too much work; complaining in Wired is much more productive.

The best part is that people in the school administration might do something like blacklisting Wolfram Alpha on computers in the school library and feel like they've dealt successfully with the problem.


Have you tried teaching calculus where everyone gets to use Wolfram Alpha and/or Mathematica? I did for three years.

I discovered that the students really don't understand the concepts and more importantly don't want to. They just want to mimic problem types. They don't want to understand the why. Plug and chug is their true desire.

Using the computer exposes right away if a student understands the concepts. You can't get started on a problem if you don't know what to tell the computer to do or why. I went back to the old paradigm. A few students got it but most never understood.


> I discovered that the students really don't understand the concepts and more importantly don't want to.

The problem is that you are forcing people who don't care about it to learn it. Of course they'll take shortcuts.

> Plug and chug is their true desire.

I mean, most tests are very plug and chug so it's no surprise.

Also note that it might be unreasonable to expect students who never used these tools to use them for the right purpose. Mathematica is a complex tool but I can't imagine that much class time is dedicated to the tool itself.


Actually, it's not a complicated tool, with the free-form input, integration with Wolfram Alpha, and templates for the commands used. Also, during every test I would correct any command that didn't execute properly.

Your original point was that it requires too much work on the part of teachers/administration to update the curriculum for the 21st century. I'm pointing out that this view, while true in some cases, may not be true in the present situation. To someone not involved in teaching, it's easy to ascribe the way things are done in education to laziness.


Sounds like Mathematica is the way to go then? Because it turned out students who could do the problems. The 'old paradigm' leaves most behind, which serves no one.


Students could not do the computer based problems. Save for a few. More students get through with the old paradigm.

Old paradigm problem. Here is f, using the definition of derivative find f'.

New paradigm problem. Here is f. Here is a Mathematica function defining (f(x+h) - f(x))/h. Graph this function for appropriate values of x and h to show whether or not f is differentiable at 2.

The old paradigm problem you just proceed as in all the examples. In the new problem they get intimidated because it involves using a computer and not a graphing calculator. They don't understand that you keep x fixed at 2 and vary h around 0. When they start the problem they give nonsensical input to the computer, get nonsensical output and promptly blame the stupid program.
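The idea of the new-paradigm problem can be sketched in a few lines of Python (the course used Mathematica; this is only an illustration, with f(x) = x² as a stand-in function, not the original exercise): keep x fixed at 2 and watch the difference quotient as h shrinks toward 0.

```python
# Difference quotient (f(x+h) - f(x)) / h with x FIXED at 2 and h varying near 0.
# f is a stand-in example, not from the original exercise.
def f(x):
    return x ** 2

x = 2
for h in [0.1, 0.01, 0.001, -0.001, -0.01]:
    print(h, (f(x + h) - f(x)) / h)
# The quotients cluster around 4, suggesting f is differentiable at 2 with
# f'(2) = 4. A kink (like |x - 2|) would give different one-sided values.
```

The whole point of the exercise is knowing which variable to hold fixed; a student who varies x instead of h gets nonsense out.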


To find out whether f is differentiable at some point, I'd naively draw f around 2 and look for obvious discontinuities. Why encrypt the problem in such a way? Alternatively, finding f' and explaining whether it is undefined at the point in question would work even more reliably - for corner cases the visualization of a graph might hide the fact, e.g. if the resolution is too poor.


Regarding the first sentence you wrote: that's how you solve the problem. Graph the difference quotient for x=2 and h in [-0.1, 0.1], or some such suitably small region. It's quite easy to do on the computer. One just needs to know that this is what you need to do. I did not encrypt the problem. It's quite straightforward, provided one knows the concepts.

If students are allowed to use Mathematica during a test, then asking them to find f' using the definition of the derivative is not helpful in determining whether they understand the concepts. For all reasonable problems, they just need to execute a single command:

Limit[ (f[x+h]-f[x])/h, h->0]

This doesn't really test their understanding. If one is going to allow students to use Mathematica on a test then the problems need to be adjusted.
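For comparison, the same single-command shortcut exists outside Mathematica, e.g. in SymPy (using f(x) = x² as a stand-in, since the symbolic limit needs a concrete f):

```python
import sympy as sp

x, h = sp.symbols('x h')
# Difference quotient for the stand-in f(x) = x**2; sp.limit does all the work.
derivative = sp.limit(((x + h)**2 - x**2) / h, h, 0)
print(derivative)  # 2*x
```

Exactly as the commenter says: one call, no understanding of the definition required.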


In the first sentence, I didn't talk about the difference quotient, just plain f.


I see. Then I don't understand what point you were trying to make. There are lots of different ways of testing understanding of a concept. I presented one way of doing so with students having access to a computer.


I was saying you presented a weak argument. Graphing the difference quotient multiple times is more trouble than it's worth.


The point of the problem I gave was not to find the derivative, or even to know whether a function is differentiable. The point is to test their understanding of the definition of the derivative. It's a good problem in that it tests whether a student understands that in the two-variable expression we call the difference quotient, one of the variables is fixed for purposes of the definition. The students need to know that it is h that varies and not x. This is an unusual occurrence and requires some getting used to.


Thanks for the clarification


Wolfram Alpha isn't the issue. The issue is that we're grading students on an arbitrary and meaningless metric, instead of the end result.

As an employer, if I hire you to do a task, let's say build a toaster, I couldn't care less about how you achieved that - as long as I've got my toaster and it can help me grow my toast-making business I'm happy. Education should work the same.


Plug and chug with Wolfram Alpha is very similar to plug and chug Stack Exchange answers in your code base.


Math and programming are a bit different though - to my knowledge in math you're not dealing with malicious users trying to give you incorrect input in hopes of exploiting bugs in your algorithm.

Edit: but to be honest if you needed to give the result Z for a set of inputs X and Y without worrying about invalid/malicious input, then a Stack Exchange copy/paste is totally fine by me.


> Math and programming are a bit different though - to my knowledge in math you're not dealing with malicious users trying to give you incorrect input in hopes of exploiting bugs in your algorithm.

But in fact you are! Actually it's even worse; in programming, you are just up against the ingenuity of actual human (or at least humanly programmed) users, whereas, in mathematics, you are up against the whole of 'reality' (in a Platonic sense). A whole tower of consequences will be built upon one mathematician's work, and, even if no human can spot the flaw in that work, if it is there then 'reality' will find it, and somewhere in the chain of consequences there will be an error that will bring the whole thing crashing down.


The thing is, if no humans can spot the flaw, then what difference does it make if they cheated in school? Either way they'll do flawed calculations if we assume your logic.

However, I wasn't really talking about people wanting to become actual mathematicians - those probably wouldn't use Wolfram just because they actually love crunching those numbers manually. The people who we're talking about here just see math as a roadblock they need to get through to do whatever they really want to do (programming, etc), and in this case this "cheating" is totally fine by me.


> The thing is, if no humans can spot the flaw, then what difference does it make if they cheated in school? Either way they'll do flawed calculations if we assume your logic.

I don't think that my logic allows us to conclude that. The idea of mathematics is that it is possible for humans to create and apply a system whose correct application makes the genesis of errors, howsoever subtle or undetectable, impossible.

Given this, and the likelihood that a mathematical error (of the conceptual type "any convergent sequence of continuous functions converges to a continuous function", rather than of the computational type "1 + 1 = 1") will not be found, it is especially important that practitioners of mathematics know how to apply their tools correctly, which they probably will not have learned by cheating in school; and, if they are able to apply those tools correctly, then they will not create errors.

(I grant that the weasel word 'correct' and its derivatives risks making this argument circular. I grant that human mathematicians collectively make an awful lot of errors, although I hope that we make fewer professional errors than many other professionals.)

> However, I wasn't really talking about people wanting to become actual mathematicians - those probably wouldn't use Wolfram just because they actually love crunching those numbers manually.

This comment seems to suggest to me the source of our disagreement in the first paragraph. As a mathematician, I don't crunch numbers professionally, and, when I have to do so outside of my profession, don't love crunching them manually. I suspect most mathematicians are in the same boat.


I definitely agree that someone using tools should know how to use them right - however, in this case maybe the curriculum should be tweaked to point out mistakes when using a tool? I.e., instead of assuming that someone would do it by hand, assume they'd do everything they can to cheat their way out of doing the work, and trick them as much as possible so the tools only work if you use them right. Instead of focusing on teaching them how to do it by hand (which they would never do in the real world, given the time constraints), teach them which tools to use and how to use them properly.

My point about crunching numbers manually or not was more about the fact that a lot of people taking those math tests do so because it's required by X policy and not because they are genuinely interested in math, and IMO that's fine - not everyone aims for a job that involves mission-critical math. Some for example might just want to develop games, where a screw-up could at worst result in a graphical glitch.


> I definitely agree that someone using tools should know how to use them right - however in this case maybe the curriculum should be tweaked to point out mistakes when using a tool?

I totally agree! I structure my classes to point out both common classes of mistakes that everyone makes, and uncommon classes of mistakes that are subtle and difficult to catch. I even have a special way of presenting it (I switch to a colour I only use for discussing mistakes).

Students hate it. One of the two comments that I consistently get on my evaluations is "stop telling us how not to do it." (The other is that my tests are too hard, precisely because they don't involve just rote computations.) I've been told by classroom observers that many students literally ignore it, ceasing to take notes while I discuss mistakes and resuming only afterwards.

I keep doing it anyway, and I make a point of why I'm doing it, but it can't all be me; some of the onus has to be on the students to be willing to think about understanding failure modes as being as important as success.


Or passing an exam that's easy/cheap for the teachers/school to grade, for that matter.


SE can show you what specific incantation format the API requires, but math is a bit different.


> “Stephen Wolfram, the mind behind Wolfram|Alpha, can’t do long division...”

What the heck kind of article is this...? I can’t read it seriously.


Even if this was true, what's the problem? He built a successful business that many people find useful, and that's good enough for me.


I respect the abilities and accomplishments of Wolfram. My problem with the statement I quoted from the article is that I suspect it is bullshit; I think Stephen can do long division, and much much more...


"I never learned long division, and look at me." -- Stephen Wolfram

http://www.stephenwolfram.com/media/meet-inventor-who-makes-...


I agree. I got hung up on this: "Wolfram|Alpha uses natural language processing technology, part of the AI family, to provide students with an academic shortcut"

WA uses NLP for some things, but not for solving equations.


Am I the only one here who would program their TI-83 in high school to run equations so I didn't have to solve them by hand?

I wasn't even that dev-savvy (and still am not), but it was super easy and was a huge time-saver on timed tests.

I obviously had to learn the concept in order to program the equation, but once I did - why should I have to go through everything manually every single time?
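The kind of calculator program described here is tiny once you know the underlying formula - e.g. a quadratic-formula solver, sketched in Python as a stand-in for TI-BASIC:

```python
import cmath  # complex-safe square root, so negative discriminants also work

def solve_quadratic(a, b, c):
    """Both roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(solve_quadratic(1, -3, 2))  # roots of x**2 - 3x + 2 are 2 and 1
```

As the commenter notes, writing this requires knowing the formula; running it afterwards is just saved labor.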


I had a math teacher who agreed. Another student and I sat in the back and programmed the calculator to do whatever she taught us that day, while she was reviewing it with the other students.

Since we developed the tools, we obviously had an understanding of the concepts, and we were allowed to use them for homework, tests and such.

And it kept us from being bored and disruptive, so it worked out for everyone.


I had a high school math teacher who actually had a policy that you could use any program on your calculator you wrote yourself, including on tests.


We had the calculators force reset before every test.


A friend of mine had a graphing CASIO calculator (we sat our exams in 1999) and he figured out a way of making text files persist across resets. This was the very epitome of unfair competitive advantage, but I figured that if he put in the effort to find the work-around, he deserved to benefit from it.


I felt extremely fortunate that Wolfram Alpha was available when I took calculus in college during summer '09. Wolfram Alpha was amazingly convenient and helpful in getting through tricky problem sets when there was a queue at the tutoring center. It didn't really allow me to cheat at the problem sets because it didn't give the intermediate steps, just the final answer. And of course it's useless on exams. At least regarding calculus, it seems to me almost 100% a helpful learning tool and 0% a contributor to cheating problems.


There's a simple solution to this, which also has the advantage of creating better educational outcomes: have students do the reading at home, and the work in class.

Dry lectures aren't the best way to learn something: hand-on work is. Listening to a teacher read a powerpoint is a waste of time, because you could just read (or watch a video) on your own.

But having an expert on the material standing over your shoulder, helping you through tough spots while you're working through it? That's valuable.


The problem is that not all math problems are worth doing. For instance, when I took Calculus 2/3, some of the homework problems weren't usually hard per se, they were just arduously long - and honestly not really worth doing by hand. If I misplaced a single number and didn't realize it immediately, I would spend the next 10 minutes solving an integral that had no use, plug it into WebAssign for it to tell me I'm wrong, backtrack until I found my issue, and solve the corrected integral. I would usually do this at least once per problem (but usually more like two or three times).

So by the end of the course, I would just figure out the integral (or rather, the triple integral), plug it into Wolfram, plug the outputted answer into Webassign, and if the answer was wrong I'd backtrack until eventually the answer that Wolfram outputs is correct. At which point I'd solve the integral by hand.
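The check-before-submitting loop described above works just as well offline with SymPy (the integral here is made up for illustration, not one of the course problems):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# Made-up triple integral over the unit cube, just to show the workflow:
# compute it symbolically, then compare against your hand-worked answer.
machine_answer = sp.integrate(x * y * z, (x, 0, 1), (y, 0, 1), (z, 0, 1))
print(machine_answer)  # 1/8
```

Only once the machine answer agrees with the grader do you bother writing the hand derivation out in full.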

---

There was one time when the homework problem asked me to find the intercept of this obscenely complex trigonometric equation. I tried for a solid half hour and couldn't solve it, so I went to the math help room to ask the TAs for help. I ended up stumping three of the TAs. A few days later, I went back to see if they had figured it out. Turns out they hadn't. They said we should just use Wolfram to get the answer.


Recalling my first-year university math analysis course: our professor himself told us about Wolfram Alpha. That was followed by a whole year of equations and integrals the site could solve only numerically.

Tasks at a high enough level beat any cheating attempt.


Or they could cheat by doing their homework, then checking the results on Wolfram Alpha to make sure they always get 100%.


I think we should encourage that. If students can catch their mistakes early and learn from them, I think they should.

What we should discourage is students using Wolfram Alpha to just get the answers without any initial work. That robs students of the chance to learn and would definitely be cheating.


I find it an awesome tool to advance knowledge. I recently had to take calc again after 10 years, and it helped me get back up to speed super fast. I would do the work, check the answer in the back of the book, and if I got it wrong, I would use Wolfram Alpha to do step by step to see where I got it wrong.

I can see some people saying that is cheating, but it is a learning tool to me. The class was 90% tests/quizzes; HW was a participation grade, meaning a tool like that gives no one an advantage - unless they are using it during the test.


Teach and test for understanding why and how the answer comes to be, not what the answer is. Maths education, in my experience, relies too much on testing of the arithmetic rather than the logic. This just results in plug and chug because it's lazy testing and students apply lazy solutions.

Good examiners will ask students questions that make them think about why and how, not what the answer is. These often don't even need algebra or arithmetic because it assumes students know the equations and instead goes a level deeper to test understanding of why those equations are the way they are.

I had the gamut in uni. I found that engineering more often successfully tested understanding as I describe above. Some didn't; my thermo was a case of 'learn how the equations work, then read the right graphs and go' but particularly some of my fluid and aerospace courses were great at asking questions that really tested deep understanding of the theories.

One good example of this that I came across more recently is some of the edX courses that used to exist featuring Walter Lewin (before his sexual harassment came to light). He was very successfully able to question his students on the why and how, not just the what. This actually proved even more important in the MOOC environment, where you can't as tightly control the environment in which students undertake examination.

It's hard, it requires good lecturers really spending a bit of time devising questions as well as supporting their tutors as they teach the students, but it's possible.


My dad said to me that knowledge isn't what you remember, it's what you can remember or look up relatively quickly.


There's a significant point a lot of people are missing. In e.g. physics and chemistry, the math is not the point. Using e.g. Wolfram Alpha is great if it allows students to solve larger and more complex problems, without sitting there solving integrals over and over (for instance).


A few issues:

1) Applications rest on the shoulders of giants; it isn't efficient to learn much more than you need to. If you only need to "understand" at a high level, you should just use tools that "do it for you" at lower levels - it's just a shame there are no better FLOSS competitors to Mathematica.

2) It should be clear what the "understanding dependencies" are - i.e. when some knowledge of the foundations can inform the higher levels, and when they don't. If I understand what 'sorted' means, I don't need to know the details of any specific sorting implementation to use it.

3) The way mathematics is split up into a million small, set-theoretically-abstract lemmas etc makes it so much harder to understand. It makes even familiar concepts hard.


There are many things which are difficult to measure directly, and thus people tend to use indirect measurements instead; school-learned math is full of such things. Consider a fairly simple example: "What is the factorial of a number?"

Determining whether a student knows this is going to take a bit of work (particularly if they lack a formal way to specify it e.g. Pi notation or even a programming language), but we might approximate it by instead asking "What is the factorial of 5?" Now obviously this is not a perfect measure of what we're actually looking for even in the absence of calculators (e.g. someone might memorize 5! = 120), but it's easy to evaluate and is probably a decent proxy in the absence of calculators.
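The distinction the comment draws - knowing the definition versus having memorized one value - is easy to see in code: a direct encoding of the recursive definition produces 5! = 120 without anyone memorizing it.

```python
def factorial(n):
    """n! straight from the definition: 0! = 1, n! = n * (n-1)!."""
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))  # 120
```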


They may let you cheat (just like SparkNotes I guess), but these tools also let people like me who haven't had advanced calculus survive when we otherwise wouldn't. If you pay for the WA service (I don't remember what it's called), you can type complex math in and it will break down how to solve it out. This was invaluable to me in some CS classes where I knew how to implement the algorithms once the math had been detangled, but the mathematical notation was so opaque to me at the time that I would've had no hope of figuring it out.

Obviously, I learned a lot from having it broken down like that so I wouldn't be as dependent on the tool now, but at the time, it was a huge learning aid.


Anyone who thinks using Wolfram Alpha for math is cheating has never tried using it for non-trivial calculations. Yeah, it will save you the time of looking up an integral from a table, but it's worthless if the integral isn't in a standard table. It will save you time on looking up the equation for the area of a dodecagon, but it won't prove that equation. It will save you time figuring out the units of the final answer, but it won't tell you if those units are logical. Wolfram Alpha only saves people from the tedious busywork of basic arithmetic and algebra; the user still needs to do all the math.


Since when is Wolfram Alpha an AI? I always thought it was simply calls to Mathematica.

Maybe when you ask general questions it uses an AI, but with functions and maths I think it only uses Mathematica.


AI gets clicks in 2017: a story about students giving an artificial intelligence their homework to finish sells better as a story than talking about students punching formulas into a web-based calculator.


Well, maybe it can even understand mathy questions given in natural language.


Using a search engine is not cheating. There is nothing wrong in making use of all available resources in assignments and this is just another resource at their disposal. My $0.02


This isn't new... I had a TI-89 in college that could solve calculus equations. As someone else pointed out, we couldn't use it on tests, though.


Had a TI-92+ myself. Best money I ever spent


If a computer can easily answer the question, it is not worth asking it to a human (this also applies in the real sense that it won't be valued by the marketplace and translate to higher wages).

How about challenging students with problems that are difficult even when modern technology is used to its fullest extent? Or teaching students how to build tools that solve their homework?


> teaching students how to build tools that solve their homework?

This would give them not only a thorough understanding of the problem they are solving, but teach them a very valuable life skill of finding ways to automate your work, and finding ways to package your expertise into software that can be run by a person without your expertise.

This would be amazing!


Basically, in most college courses the problems are either 1) easy enough that implementing a program to solve them isn't super insightful, or 2) difficult enough that complete automation would mean "do research".

I'll limit myself to Math since that's the topic of this article:

Calculus sequence: CS 1 is not a pre-req, and there's not enough time to teach both CS 1 and Calc 1/2/3 in a single course. "Implement it" works well for derivatives but not integrals. You're not gonna teach Risch, and implementing integration tricks isn't particularly insightful IMO. The cost/benefit ratio explodes in Calc 3, and the physical intuitions become as important as the calculations.

Everything past that is proof-based and now you're kind of in "your homework is an open research problem in combining NLP with theorem proving" territory. Maybe with the exception of particularly bad Linear Algebra courses and a bit of the early stuff in Algebra.

From a "pragmatic skills" perspective, this approach is still highly suspect. E.g. no one's going to invent their way to Risch by implementing integration tricks.

Point is, every field teaches useful life skills / knowledge, and programming gets in the way as often or more often than it helps.


I work training professionals. I used to be a project chief (the guy who hired, fired, and evaluated new personnel).

I encourage my students to do this so-called cheating, because it's what people in the real world do: people don't need to memorize a lot of topics, and people use calculators, computers, and Excel. Sheesh with the old and rusty model of learning.


I am deeply grateful that something like WolframAlpha existed when I was taking calculus classes. It helped me so much. I could make up integrals, solve them, and see if I was right, just like that. It even has a function to show step-by-step solutions of integrals and series. All extremely useful tools to deepen understanding, not to cheat.


Students cheat because our society values diplomas more than knowledge.

It's a true problem, but the consequences are not visible yet.


SymPy is in the same ballpark as Wolfram for calculus, and not rate-limited: http://www.sympy.org/en/index.html

It doesn't include the databases of structured information or the NLP-like interface though.
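For anyone curious, a quick taste of the SymPy workflow - integrate, then differentiate back as a sanity check:

```python
import sympy as sp

x = sp.symbols('x')
# An antiderivative of exp(x)*sin(x), found symbolically.
antideriv = sp.integrate(sp.sin(x) * sp.exp(x), x)
print(antideriv)
# Differentiating should recover the integrand; the difference simplifies to 0.
print(sp.simplify(sp.diff(antideriv, x) - sp.sin(x) * sp.exp(x)))  # 0
```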


The "problem" is that the students have moved to https://www.computerbasedmath.org/ but the curriculum hasn't (yet). Just fix the curriculum.


It's not just high school students. I had some classmates in grad school using it to breeze through feedback-theory homework. The tool is as powerful as students are lazy.


Today I learned you can cheat on homework. So apparently giving up and having the wrong answers (or no answers at all) is what professors would prefer. Cool :)


> So apparently giving up and having the wrong answers (or no answers at all) is what professors would prefer. Cool :)

It's much easier to correct known unknowns than unknown unknowns.

The best advice I was ever given as a student was "Pretend like you'll get an automatic A on your report card and treat your grades as a feedback mechanism for figuring out what you do and do not understand. Just focus on learning." As a simple corollary, cheating is kinda silly.


>It's much easier to correct known unknowns than unknown unknowns.

This assumes you'll get any kind of personalized feedback and instruction in a lower division maths course. Good luck with that at a public university.


I have a Master's degree in physics, and Wolfram|Alpha would not have been out of place in the acknowledgements section of my thesis.


“Difficult to trace”? If it generates the same output for any given input, then you can just compare the students’ submitted papers with it, no?


Sounds like Wolfram Alpha could be very helpful for anyone doing self-study from math textbooks that don't put answers in the back.


"while a few were still using it at their jobs as engineers or quantitative analysts"

Er... which is a good thing, surely?


The big issue to me is things that matter take effort.

So many things (like math answers) are made so available with no effort.


we need to learn 'the basics' and survive without tools to some degree.


I found symbolab.com much better at this. It also has a much better user interface.


I think we should learn


So the test is rewarding students with the skills to perform well in real life, instead of only those who have submitted blindly to the outdated traditions taught by those who cannot do.

There is of course a level where you cannot just copy solutions, but those tend to be badly covered by "facts-centered" written exams anyway, so why not build your exams around the reality that modern tools exist and will be used, instead of testing as if the world had not changed since the teachers left school?



