> I really don’t think there’s a coherent pro-genAI case to be made in the education context
My personal experience is that Gen AI is an amazing tool to support learning, when used properly.
Seems likely there will be changes in higher education to work with gen AI instead of against it, and it could be a positive change for both teachers and students.
>Seems likely there will be changes in higher education to work with gen AI instead of against it, and it could be a positive change for both teachers and students.
Since we're using anecdotes, let me leave one as well--it's been my experience that humans choose the path of least resistance. In the context of education, I saw a large percentage of my K-12 peers do the bare minimum to get by in their classes, and in college I saw many resort to Chegg to cheat on their assignments/tests. In both cases I believe the motivation was the same--half-assing work/cheating takes less effort and time.
Now, what happens when you give those same children access to an LLM that can do essentially ALL their work for them? If I'm right, those children will increasingly lean on those LLMs to do as much of their schoolwork/homework as possible, because the alternative means they have less time to scroll on TikTok.
>[blank] is an amazing tool ... when used properly
You could say the same thing about a myriad of controversial things that currently exist. But we don't live in a perfect world--we live in a world where money is king, and oftentimes what makes money is in direct conflict with utilitarianism.
> Now, what happens when you give those same children access to an LLM that can do essentially ALL their work for them? If I'm right, those children will increasingly lean on those LLMs to do as much of their schoolwork/homework as possible, because the alternative means they have less time to scroll on TikTok.
I think schools are going to have to very quickly re-evaluate their reliance on "having done homework" and on essays as evidence that a student has mastered a subject. If an LLM can easily do something, then that thing is no longer measuring anything meaningful.
A school's curriculum should be created assuming LLMs exist and that students will always use them to bypass make-work.
>A school's curriculum should be created assuming LLMs exist and that students will always use them to bypass make-work
Okay, how do they go about this?
Schools are already understaffed as it is; how are teachers suddenly going to find the time to revamp the entire educational blueprint? Where is the funding for this revolution in education going to come from when we've just slashed the education budget?
I'm not an educator, so I honestly have no idea. The world has permanently changed, though--we can't put the toothpaste back in the tube. Any student, with a few bucks and a few keystrokes, can instantly solve written homework assignments and generate an essay of any length on any topic. Something needs to change in the education process, but who knows what it will end up looking like?
This is what I meant in my other comment. Proponents of AI (not necessarily you) haven't seriously considered how these tools will impact the population.
Until they come up with a semblance of a plan, teachers will bear an undue burden: slogging through automated schoolwork submissions, policing cheating, and handling children who lack the critical faculties to become well-functioning members of society.
>> an amazing tool to support learning, when used properly.
How can kids, think K-12, who don't even know how to "use" the internet properly - or even their phones - learn how to learn with AI? Just as social media and mobile apps reduced the internet to easy, mindless clicking, LLMs make school a mechanical task. Your argument sounds like the one about LLMs helping experienced, senior developers code more effectively while eliminating many of the chances to grow the skills needed to join that group. It sounds like you already know how to learn and can use AI to enhance that. My 12-yr-old is not there yet and may never get there.
>> How can kids, think K-12, who don't even know how to "use" the internet properly - or even their phones - learn how to learn with AI?
For every person/child who just wants the answer, there will be at least some who want to know why. And these endlessly patient machines are very good at feeding that curiosity.
>For every person/child who just wants the answer, there will be at least some who want to know why
You're correct, but let's be honest here: the majority will use it as a means to get their homework over and done with so they can return to TikTok. Is that the society we want to cultivate?
>And these endlessly patient machines are very good at feeding that curiosity
They're also very good at feeding you factually incorrect information. A textbook, in comparison, was crafted by experts in their field, and is often fact-checked by many more experts before it is published.
And those carefully checked textbooks are just as full of factually incorrect information. If you doubt this, look at any textbook from 50+ years ago; they were also carefully checked--more so than today's--and yet contained many things we now know to be incorrect. In fifty years, our present textbooks will look just as bad, if not worse (seriously, look at a modern K-12 textbook).
So the key thing to get across to kids is that argument by authority is an untrustworthy heuristic at best. AI slop can even help with this.
The issue with education in particular is a much deeper one. GenAI ripped the bandage off and exposed the wound to the world, and it is greatly accelerating the decay, but it was not responsible for creating the wound in the first place.
What is the purpose of education? Is it to learn, or to gain credentials showing that you have learned? Too much of education has become the latter, to the point that we have sacrificed the former. Eventually this brings down both, as the degree gains a reputation for no longer signifying that the former ever happened.
Or rather: the existing systems that check for learning before granting a degree were largely not ready for the impact of genAI, and teachers and professors have adapted poorly--sometimes due to a lack of understanding of the technology, often due to their hands being tied.
GenAI used to cheat is a great detriment to education, but a student using genAI to learn can benefit greatly, as long as they have matured enough in their education to have the critical thinking to handle mishaps by the AI and to properly differentiate between when they are learning and when they are having the AI do the work for them (I don't say 'cheat' here because some students will accidentally cross the line, and 'cheat' often carries a hint of mens rea). To a student who is mature enough and interested in learning more, genAI is a worthwhile tool.
How do we handle those who use it to cheat? How do we handle students who are too immature in their education journey to use the tool effectively? Are we ready to have a discussion about those who only care about the degree and see the education required to earn it as just a means to an end? How do teachers (and increasingly professors) fight back against the pressure of systems that optimize for granting credentials and simply assume the education will happen behind those systems (Goodhart's Law, anyone)? Those questions don't exist because of genAI, but genAI has greatly increased our need to answer them.
I think he is talking about education as in school/college/university rather than learning?
I too am finding AI incredibly useful for learning. I use it for high-level overviews and to guide me to resources (online formats and books) for deeper dives. Claude has so far proven to be an excellent learning partner; no doubt other models are similarly good.
That is my take. Continuing education via prompt is great, I try to do it every day. Despite years of use I still get that magic feeling when asking about some obscure topic I want to know more about.
But that doesn't mean I think my kids should primarily get K-12 and college education this way.
Dial-up penetration in the mid-90s was still very thin, and high-speed access was limited to universities and the biggest companies. Here are the numbers ChatGPT found for me:
* 1990s: Internet access was rare. By 1995, only 14% of Americans were online.
* 2000: Approximately 43% of U.S. households had internet access.
* 2005: The number increased to 68%.
* 2010: Around 72% of households were connected.
* 2015: The figure rose to 75%.
* 2020: Approximately 93% of U.S. adults used the internet, indicating widespread household access.
Yes, it was thin, but 1995-96 was when the "Internet" went mainstream. Depending on your area, you could have several dial-up ISP options. Major metros like Boston had dozens. I remember hearing ISP ads on the radio!
1995 was when Windows 95 launched, and its built-in dial-up networking support allowed a "normal" person to easily get online. 1995 was the Netscape IPO, which kicked off the dot-com bubble. 1995 was when Amazon first launched their site.