I've heard this concern a lot lately. It's understandable. But I think it's shortsighted.
Does using a calculator make you a cheater at math? No, you still needed to understand the concepts.
Does using ChatGPT make you a cheater at school? While ultimately that's up to the schools to decide, I would argue it shouldn't be considered cheating, because you need enough understanding to ask the right question as well as to spot what's wrong in the answer.
For example, I was helping my kid out with their Java homework and we were both stuck for a good hour. Finally I loaded the question into ChatGPT.
The answer that came back helped us solve the problem. But we didn't just cut and paste. We looked at that solution and compared it to our own to find the problem.
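To give a flavor of what that comparison looked like, here's a made-up Java example (our actual bug was different, and the class name is invented): a classic homework mistake is comparing Strings with == instead of .equals(), which compiles and runs but checks object identity rather than contents.

    import java.util.Scanner;

    // Hypothetical homework-style snippet, not the actual assignment.
    public class QuitCheck {
        public static void main(String[] args) {
            Scanner in = new Scanner(System.in);
            String command = in.nextLine();

            // Buggy: == compares object references, so this branch is
            // usually skipped even when the user types "quit".
            if (command == "quit") {
                System.out.println("buggy check: quitting");
            }

            // Fixed: .equals() compares the actual characters.
            if (command.equals("quit")) {
                System.out.println("correct check: quitting");
            }
        }
    }

Spotting that kind of difference between a generated solution and your own is exactly the understanding I mean.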
I don't consider that cheating. Others may feel differently.
Ultimately, I think of this as augmented thinking. In the real world, we all use whatever tools we have at our disposal. If in the real world we have access to Google and calculators and now AI chatbots, why should we train and educate ourselves as if those don't exist?
I'd rather my kids learn to use every tool at their disposal to be as fast, efficient, and effective as possible. And unlike paying someone to do your homework, this is something everyone can do, not just those with discretionary income. So I really have no problem with this at all.
> Does using a calculator make you a cheater at math?
Yes, yes it does if you're testing basic arithmetic.
The difference between a calculator and ChatGPT is the scope of problems it solves for you.
If you could read your math problem aloud to your calculator and it could solve it, showing each step along the way, people would clearly see it as cheating. It can't; it can only do simple arithmetic, so you still need to translate the requirements into an understanding of the problem, determine an algorithm, and then perform it, with the calculator doing only the lowest-level operations.
ChatGPT does the equivalent of this (ironically, for everything except math right now). There is no "higher level" work left for a person to do. It does it all. Its only limitation right now is that it's still under development.
ChatGPT can't do math yet, but let's say it fixes its math bug soon. Then you won't be able to come up with a type of high school or undergrad math problem it can't do. It can generate Python code; it will be able to generate a proof.
And math is the hard one. Something like "write a few paragraphs discussing the initiating factors for World War I" will be trivial.
If someone is going to claim ChatGPT (v2 or v3) doesn't completely upend education, then give an example of a type of question that it will be inherently unable to solve for you that people will still need to do.
> If you could read your math problem aloud to your calculator and it could solve it, showing each step along the way, people would clearly see it as cheating.
> If someone is going to claim ChatGPT (v2 or v3) doesn't completely upend education, then give an example of a type of question that it will be inherently unable to solve for you that people will still need to do.
ChatGPT is fairly superficial. The challenge will be to transition education from superficial regurgitation to deeper understanding.
Or to put it simply: writing an essay/code/etc. isn't good enough anymore, you now need to do it better than ChatGPT.
Can you gain deeper understanding without first gaining superficial understanding, though? And if not, and our current method of imparting superficial knowledge breaks down, wouldn't there be a pipeline problem, because far fewer students would get to the point where you can start teaching them deep understanding?
(Imagine a scenario in which no assessment is possible in any mathematics course below the level of differential geometry. Would "we just have to switch to teaching students advanced math instead" be a solution?)
Something along these lines is already a problem at US universities, as students chegg, stackexchange, and collaborate their way through up-to-junior courses and then are so underprepared in senior-level ones that really 80% of a given class ought to be failed, if that were politically feasible. At least the current situation is more due to a lack of will than a lack of a way to stop the cheating, so students are under some pressure not to make it too egregious.
> an example of a type of question that it will be inherently unable to solve for you that people will still need to do.
Something like "tell me about your day so far" or "describe some important experiences in your life." Obviously, you can use ChatGPT to answer those questions, but they won't be true answers, since ChatGPT doesn't know you.
Of course, there is the issue of verifying those answers — probably the teacher won't be calling the student's parents to make sure it's accurate (:
As objective knowledge gets increasingly captured by external systems (search, maps, image generators, etc), subjective knowledge and personal experience remain out of its reach. I wonder if this could push us in a direction of valuing our personal life experiences more highly, as the other stuff becomes increasingly commoditized?
Speaking as someone who just took a take-home exam and used ChatGPT to complete it (documented, checked with the prof before, etc.) I may have some insight here. Also relevant: my PhD mentor is leading a synchronous/asynchronous discussion among the local academia folks regarding this specific concern.
The short takeaway is that it’s a huge problem when it comes to test taking. Untimed power tests are the gold standard for assessing student knowledge. The epitome of this kind of test (in any discipline) is a take-home exam with extensive short/long answer questions. The test is open-internet, open-notes, open-everything, except for collaboration. It has long been established that this is the best way to assess whether or not a student has learned the material. The worst alternative is a stressful in-person exam that is closed-everything. This alternative produces many false negatives: anxiety, slow test taking, etc. cause students who know the material to perform poorly.
The issue is detecting cheating. It’s very easy for teachers to administer said worst alternative. An untimed power test, on the other hand, is extremely labor-intensive to produce/mark. Also, cheating is detected by comparing answers between students. This adds another level of complexity compared to multiple choice or the simpler short-answer questions that are delivered during a timed exam.
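To make “comparing answers” concrete, here’s a minimal sketch of the idea (my own illustration, not any real proctoring tool): score every pair of submissions by word overlap and flag the suspiciously similar pairs. Real systems are far more sophisticated, and the cutoff below is arbitrary.

    import java.util.*;

    // Illustrative only: flag pairs of answers whose word overlap
    // (Jaccard similarity) exceeds an arbitrary threshold.
    public class AnswerSimilarity {
        // Jaccard similarity: intersection size over union size of word sets.
        static double jaccard(String a, String b) {
            Set<String> wa = new HashSet<>(Arrays.asList(a.toLowerCase().split("\\W+")));
            Set<String> wb = new HashSet<>(Arrays.asList(b.toLowerCase().split("\\W+")));
            Set<String> inter = new HashSet<>(wa);
            inter.retainAll(wb);
            Set<String> union = new HashSet<>(wa);
            union.addAll(wb);
            return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
        }

        public static void main(String[] args) {
            // Toy submissions; real answers would be far longer.
            Map<String, String> answers = Map.of(
                "student1", "The treaty imposed reparations that destabilized the economy",
                "student2", "The treaty imposed harsh reparations that destabilized the economy",
                "student3", "Alliances and mobilization timetables escalated a regional conflict");
            List<String> ids = new ArrayList<>(answers.keySet());
            for (int i = 0; i < ids.size(); i++) {
                for (int j = i + 1; j < ids.size(); j++) {
                    double score = jaccard(answers.get(ids.get(i)), answers.get(ids.get(j)));
                    if (score > 0.6) { // arbitrary cutoff, purely illustrative
                        System.out.printf("%s and %s look suspiciously similar (%.2f)%n",
                                ids.get(i), ids.get(j), score);
                    }
                }
            }
        }
    }

The labor-intensive part isn’t running a comparison like this; it’s writing questions rich enough that similarity actually means something, and then reading the flagged pairs by hand.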
ChatGPT adds a new layer of difficulty to detecting cheating, one that currently looks like it’s going to hurt students. On its current track, it’s going to make untimed power tests much harder to produce and administer. They’re already so difficult to create that most professors just opt for the simpler timed exam.
In an entry-level research-oriented graduate class, the point is to learn the foundational material so that you can progress to more abstract levels. ChatGPT is making it much harder to assess these classes. As far as testing whether someone can solve a more technically-oriented problem that would be seen in industry, I’m with your interpretation.
I think we need to ask ourselves what we're testing and why.
An untimed at-home test has very little value to me, because it's unlike anything you would ever be asked to do for work. And let's not forget the real reason we have schools is to prepare us for work.
Never in my entire career have I been sent home to work on something without a deadline. It's always "get this done by noon from your desk".
What we SHOULD be testing is people's ability to find answers, not memorize them. I want someone with amazing Google skills, because Google knows far more than any one person could ever hope to even scratch the surface of.
We need to think about the Internet and computers as tools. We wouldn't ask a carpenter to build a house with nothing but a hammer, nails, and a saw, not even on a test, just to see if he could do it. Tradespeople are trained with their tools in hand.
For knowledge workers, the Internet is our biggest tool. AI is a tool. If I were a teacher, I'd want to test my students on their ability to make the most of the tools at their disposal, not how they perform in artificially constrained scenarios.
EDIT: I could imagine some exceptions that prove the rule. For example, a surgeon should know everything they need to know, because they can't just pull out a phone and Google it when they're elbows deep inside of someone. But to me this proves the rule, because it's about a real world application: If you're testing prospective surgeons, make sure they only have access to the information and tools they would have on the job.
I appreciate your thoughtful response. I failed to communicate my point. An untimed power test has a deadline. It’s also not about memorization. You use all of the resources at your disposal to complete the test by the deadline. The only rule is that you aren’t allowed to collaborate, and this rule is enforced by comparing the students’ answers after submission of the test.
As an aside: I’m not sure if you were trying to argue that a UPT isn’t appropriate for industry. Whether or not a university’s job has to do with industry (or whether it’s being done appropriately) is a different discussion I have no interest in pursuing here or elsewhere on HN. That said, decades of research have consistently found that the UPT is the ultimate tool for ensuring that universities have done their job.
The point of my response was to communicate that ChatGPT has made it much harder to administer these types of exams because it’s so difficult to detect cheating. This is very different from saying ChatGPT is cheating. It’s not; it just makes students who use ChatGPT have such similar answers that you can’t tell if they cheated. This forces exams back into in-person, timed settings (without open internet access). It’s a miserable environment for someone with the aforementioned problems (anxiety, slow test-taking, dyslexia, etc.).
I agree. This reminds me of the argument that being able to use web search during a “coding interview” is cheating.
My stance is that if web search can render the difference between a competent and incompetent candidate undetectable, the problem is the interview task, not access to web search. (Not to mention problems with coding interviews in general.)
I’ll go out on a limb and say the same general principle applies here: If ChatGPT can pass a test, the test is measuring the wrong thing.
> if web search can render the difference between a competent and incompetent candidate undetectable, the problem is the interview task, not access to web search
;-)
My take is that the problem of distinguishing between competent and incompetent candidates in 20 minutes is hard (if not impossible), and interviewers may not be able to do so reliably.
Your take appears to be a generalization of my take along at least two axes:
1. Asserting that it's hard if not impossible to generate valuable signal, where I am speaking only to the case where access to web search makes it hard if not impossible to generate valuable signal; and
2. I suspect you are also factoring in a very thorny problem: not just detecting candidates who are attempting the interview in good faith but are incompetent at the task given, but also detecting candidates who are gaming the system by memorizing solutions to popular tasks.
Also, math changed after calculators became ubiquitous: questions became more about the concepts (which calculators don't help with) rather than the arithmetic. ChatGPT seems to be good at reciting facts (when it doesn't get them hilariously confused, that is), but not so much at the sort of synthesis a good essay entails.
This, in my opinion, is a very good thing. Learning _should_ be more about synthesis than fact memorization anyway; synthesis is recognized as one of the highest forms of learning under Bloom's Taxonomy.
While yes, it means that our education system will need to adapt, I hope it also means we'll be teaching our students better because they'll be encouraged to learn at a deeper level.
ChatGPT is surprisingly good at synthesis. For me that's where it provides the most value. You can ask it to write an essay on a novel prompt and it's able to spit out something decent, if perhaps factually incorrect. There's a lot of work still to do here, but the progress leads me to believe that a decade from now, essays written by people will be considered a novelty.
> this is something everyone can do, not just those with discretionary income
For now. The site says it's currently free because it's in a limited research preview. I'm curious what you would think if this was a $20/month subscription instead?
Curious as well. The past twenty years have shown that very few useful things remain free once they can be monetized. Maybe the basic version stays free because it assists with training, but you can bet your bottom dollar that there will be an "improved" version that is absolutely charged for.
I wonder how it'll go. If this ends up being used professionally, we may even see companies offer it for free to students to build a reliance on it (much like Microsoft when it comes to Word or Google with GSuite Education).
Is it cheating to pay someone to take the test for me? Isn't hiring specialized labor just another kind of tool? In the real world, I certainly have access to all sorts of specialists.
Yes, but the point of school is to train you to be able to work on these things. If you fail to see value in that, then why are you going to school in the first place?
This becomes the issue. If the school were there to teach you to reach an outcome, and the desired outcomes were difficult enough to achieve, then any (ethical) route you take should be deemed okay. For example, a school teaching you to build a particular device, run a research study, or build a business: if you hire for it, if you outsource, if you harness the power of AI, so long as you achieve the outcome, you've achieved the outcome.
Education is out of date for many areas of modern life. If it doesn't improve, it'll likely fall behind.
Brilliant. All the American kids can hire Chinese students to take tests for them, because it will be a great lesson in how the real world operates, junior!
So many utterly ridiculous comments in this thread.
> I don't consider that cheating. Others may feel differently.
This is the wrong focus, imo. You can ask "is it cheating?" as you do. Alternatively you could ask "but is this learning? Is it helping my kids grasp the subject better?"
Tools, "augmented thinking", etc. are all concerns regarding getting something done. But the goal for your kids is learning.
I think the comparison to a calculator isn't fair, because a calculator serves a very discrete purpose. Your brain can compartmentalize a calculator's function.
I have been using ChatGPT to code and write copy for a week, and its abilities are so broad that my brain couldn't really slot it into a specific area. The result is that my brain started reaching for it every time it felt strained. I had a similar thing happen when I was googling a lot of info for work and then found myself considering searching for things like "when is my dad's birthday?". My mind couldn't slot its function into a specific area like it can with a calculator.
"I'd rather my kids learn to use every tool at their disposal to be as fast, efficient, and effective as possible."
But that's not how (pre college) education works for the most part, unfortunately. It's lots of fact learning and essay busywork, not a lot of actual problem solving and critical thinking.
My public high school education largely focused on problem solving and critical thinking. Seems like ChatGPT could possibly be just the kick needed to upend the old system.
In the US, I think it actually comes from a factory-work mindset where everyone does the same kind of thing and learns the same way. The system is at least 50 years behind, but in my experience teachers are much more attuned to the present.
There are different definitions of cheating. You have to look at student handbooks or a teacher's or professor's syllabus to find out what constitutes cheating. That said, I think tests will have to be designed around the possibility of students having access to ChatGPT in the future.
But it absolutely could be the case that using ChatGPT is considered cheating like in a case where students are forbidden from using any other resources. OTOH, for tests that were previously "open internet" I assume ChatGPT is permitted.
An interesting point of contention here could be if the teacher says you cannot collaborate with anyone else. Does ChatGPT count as a person? I would think the intention is to restrict tools like ChatGPT, but it is not traditionally considered a person.
What is the difference between copying and pasting an essay from the internet and copying and pasting an essay from ChatGPT, which samples from a large training set of essays from the internet?
> For example, I was helping my kid out with their Java homework and we were both stuck for a good hour. Finally I loaded the question into ChatGPT.
> The answer that came back helped us solve the problem.
But maybe the real lesson of ChatGPT, or a near-future descendant, is that it replaces human programmers altogether?
Don’t be silly. There’s no way to do a writing course without extensively writing about a prompt.
With ChatGPT, you just plug the prompt in and out pops your finished essay.
You have learnt nothing besides some minor comprehension skills.
Same with programming courses. It will answer all basic coding prompts. You cannot learn just by reading through a finished solution, no matter how much you want to make the case for it.