
Happy user of the spaceship prompt for zsh (https://spaceship-prompt.sh); among other things, it runs repo status asynchronously.
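In case it helps, here's a minimal sketch of the relevant part of my ~/.zshrc. The install path and option names below are from memory and are assumptions -- they depend on your spaceship version and how you installed it, so double-check against the docs at the link above:

    # Load spaceship (this path assumes a Homebrew install; adjust for your plugin manager)
    source "$(brew --prefix)/opt/spaceship/spaceship.zsh"

    # Render prompt sections asynchronously so a slow git status never blocks the prompt
    SPACESHIP_PROMPT_ASYNC=true

    # Trim the prompt down to just the sections you want
    SPACESHIP_PROMPT_ORDER=(dir git exec_time line_sep char)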


> I really don’t think there’s a coherent pro-genAI case to be made in the education context

In my own experience, gen AI is an amazing tool to support learning, when used properly.

Seems likely there will be changes in higher education to work with gen AI instead of against it, and it could be a positive change for both teachers and students.


>Seems likely there will be changes in higher education to work with gen AI instead of against it, and it could be a positive change for both teachers and students.

Since we're using anecdotes, let me leave one as well--it's been my experience that humans choose the path of least resistance. In the context of education, I saw a large percentage of my peers during K-12 do the bare minimum to get by in their classes, and in college I saw many resorting to Chegg to cheat on their assignments/tests. In both cases I believe the motivation was the same--half-assing work or cheating takes less effort and time.

Now, what happens when you give those same children access to an LLM that can do essentially ALL their work for them? If I'm right, those children will increasingly lean on those LLMs to do as much of their schoolwork/homework as possible, because the alternative means they have less time to scroll on TikTok.

But wait, this isn't an anecdote, it's already happening! Here's an excellent article that details the damage these tools are already causing to our students: https://www.404media.co/teachers-are-not-ok-ai-chatgpt/

>[blank] is an amazing tool ... when used properly

You could say the same thing about a myriad of controversial things that currently exist. But we don't live in a perfect world--we live in a world where money is king, and oftentimes what makes money is in direct conflict with utilitarianism.


> Now, what happens when you give those same children access to an LLM that can do essentially ALL their work for them? If I'm right, those children will increasingly lean on those LLMs to do as much of their schoolwork/homework as possible, because the alternative means they have less time to scroll on TikTok.

I think schools are going to have to very quickly re-evaluate their reliance on "having done homework" and using essays as evidence that a student has mastered a subject. If an LLM can easily do something, then that thing is no longer measuring anything meaningful.

A school's curriculum should be created assuming LLMs exist and that students will always use them to bypass make-work.


>A school's curriculum should be created assuming LLMs exist and that students will always use them to bypass make-work

Okay, how do they go about this?

Schools are already understaffed as it is; how are teachers suddenly going to have time to revamp the entire educational blueprint? Where is the funding for this revolution in education going to come from when we've just slashed education funding?


I'm not an educator, so I honestly have no idea. The world has permanently changed though... we can't put the toothpaste back into the tube. Any student, with a few bucks and a few keystrokes, can instantly solve written homework assignments and generate an any-number-of-words essay about any topic. Something needs to change in the education process, but who knows what it will end up looking like?


I would think that at least part of the solution would have to involve having students do more work at school instead of as homework.


Okay, and how do you make room for that when there's barely enough time to teach the curriculum as is?


Obviously something has to give.


This is what I meant in my other comment. Proponents of AI (not necessarily you) haven't seriously considered how these tools will impact the population.

Until they come up with a semblance of a plan, teachers will bear the undue burden of slogging through automated schoolwork submissions, policing cheating, and handling children who lack the critical faculties to be well-functioning members of society.

It's all very depressing.


> "If an LLM can easily do something, then that thing is no longer measuring anything meaningful."

An automobile can go quite far and fast but that doesn't mean the flabbiness and poor fitness of its occupants isn't a problem.


>> an amazing tool to support learning, when used properly.

How can kids, think K-12, who don't even know how to "use" the internet properly - or even their phones - learn how to learn with AI? Just as social media and mobile apps reduced the internet to easy, mindless clicking, LLMs make school a mechanical task. It feels like your argument is similar to LLMs helping experienced, senior developers code more effectively while eliminating many of the chances to grow the skills needed to join that group. It sounds like you already know how to learn and can use AI to enhance that. My 12-yr-old is not there yet and may never get there.


>> how can kids, think K-12, who don't even know how to "use" the internet properly - or even their phones - learn how to learn with AI?

For every person/child that just wants the answer there will be at least some that will want to know why. And these endlessly patient machines are very good at feeding that curiosity.


>For every person/child that just wants the answer there will be at least some that will want to know why

You're correct, but let's be honest here: the majority will use it as a means to get their homework over and done with so they can return to TikTok. Is that the society we want to cultivate?

>And these endlessly patient machines are very good at feeding that curiosity

They're also very good at feeding you factually incorrect information. In comparison, a textbook is crafted by experts in their field and is often fact-checked by many more experts before it is published.


And the carefully checked textbooks are just as full of factually incorrect information. If you doubt this, look at any textbook from 50+ years ago; they were also carefully checked--more so than today's--and yet contained many things we now know to be incorrect. In fifty years, our present textbooks will look just as bad, if not worse (seriously; look at a modern K-12 textbook).

So the key thing to get across to kids is that argument by authority is an untrustworthy heuristic at best. AI slop can even help with this.


> My 12-yr-old is not there yet and may never get there.

Wouldn't classroom exams enforce that though? Think of LLMs as an older sibling or parent who would help pupils cheat on essays.


The issue with education in particular is a much deeper one. Gen AI has ripped the bandage off and exposed the wound to the world, while also greatly accelerating its decay, but it was not responsible for creating it.

What is the purpose of education? Is it to learn, or to gain credentials that you have learned? Too much of education has become the latter, to the point we have sacrificed the former. Eventually this brings down both, as a degree gains a reputation of no longer signifying the former ever happened.

Or perhaps the existing systems that check for learning before granting a degree were largely not ready for the impact of genAI, and teachers and professors have adapted poorly--sometimes due to a lack of understanding of the technology, often due to their hands being tied.

GenAI used to cheat is a great detriment to education, but a student using genAI to learn can benefit greatly, as long as they are far enough along in their education to think critically about the AI's mishaps and to properly distinguish when they are learning from when they are having the AI do the work for them (I don't say cheat here because some students will accidentally cross the line, and 'cheat' often carries a hint of mens rea). To a student mature enough and interested in learning more, genAI is a worthwhile tool.

How do we handle those who use it to cheat? How do we handle students who are too immature in their education journey to use the tool effectively? Are we ready to have a discussion about learners who only care about the degree, for whom the education required to earn it is just a means to an end? How do teachers (and increasingly professors) fight back against the pressure of systems that optimize for granting credentials and simply assume the education is behind them (Goodhart's Law, anyone)? Those questions don't exist because of genAI, but genAI greatly increased our need to answer them.


I think he is talking about education as in school/college/university rather than learning?

I too am finding AI incredibly useful for learning. I use it for high-level overviews and to help guide me to resources (online formats and books) for deeper dives. Claude has so far proven to be an excellent learning partner, and no doubt other models are similarly good.


That is my take. Continuing education via prompt is great; I try to do it every day. Despite years of use I still get that magic feeling when asking about some obscure topic I want to know more about.

But that doesn't mean I think my kids should primarily get K-12 and college education this way.


Computers and the internet have been around for 20 years, and yet the evaluation systems of our education have largely remained the same.

I'm not holding my breath on this.


Where are you located? The Internet boom in the US happened in the mid-90's. My first part-time ISP job was in 1994.


Dial-up penetration in the mid-90's was still very thin, and high-speed access was limited to universities and the biggest companies. Here are the numbers ChatGPT found for me:

* 1990s: Internet access was rare. By 1995, only 14% of Americans were online.

* 2000: Approximately 43% of U.S. households had internet access.

* 2005: The number increased to 68%.

* 2010: Around 72% of households were connected.

* 2015: The figure rose to 75%.

* 2020: Approximately 93% of U.S. adults used the internet, indicating widespread household access.


Yes, it was thin, but 1995-96 was when "the Internet" went mainstream. Depending on your area, you could have several dial-up ISP options. Major metros like Boston had dozens. I remember hearing ISP ads on the radio!

1995 was when Windows 95 launched, and its built-in dial-up networking support allowed a "normal" person to easily get online. 1995 was the Netscape IPO, which kicked off the dot-com bubble. 1995 was when Amazon first launched their site.


I don't think LLMs replace thinking, but rather elevate it. When I use an LLM, I’m still doing the intellectual work, but I’m freed from the mechanics of writing. It’s similar to programming in C instead of assembly: I’m operating at a higher level of abstraction, focusing more on what I want to say than how to say it.


The writing is the work, though. The words on paper (or wherever) are the end product, but they are not the point. See chapter 5 of Sönke Ahrens, How to Take Smart Notes: One Simple Technique to Boost Writing, Learning and Thinking - for Students, Academics and Nonfiction Book Writers (2017), for advice on how writing the ideas in your own words is the primary task and improves not only writing, but all intellectual skills, including reading and thinking. C. Wright Mills, in his 1952 essay "On Intellectual Craftsmanship," says much the same thing. Stating the ideas in your own words is thinking.


If you do not know how to say something, you do not really know what you want to say.


When I microwave a frozen meal for dinner, I'm still a chef, but I'm freed from the mechanics of preparing and assembling ingredients to form a dish.


You can also use a microwave to bloom spices, or thaw frozen veggies from your home garden, or steam things, or thicken sauces, …

The microwave is a tool with certain useful aspects and certain limitations. It is also a tool which can lead to faster outcomes of things you need to do if the tool didn’t exist. At what point should a chef draw the line in the tools they use? Should I forgo microwaves? What about pressure cookers? Ovens? Surely knives are fair game? Maybe I should knap flint and butcher meat with it and cook over an open campfire — then truly no one can claim I am not a chef.


I've been using LLMs as learning tools rather than simply answer generators. LLMs can teach you a lot by guiding your thinking, not replacing it.

It's been valuable to engage with the suggestions and understand how they work—much like using a search engine, but more efficient and interactive.

LLMs have also been helpful in deepening my understanding of math topics. For example, I've been wanting to build intuition around linear algebra, which for me is a slow process. By asking the LLM questions, I find the explanations make the underlying concepts more accessible.

For me it's about using these tools to learn more effectively.


> OpenAI was last valued at $157 billion in October, when it raised $6.6 billion. A near-doubling of its value in just a few months would be extraordinary even by the standards of Silicon Valley’s current AI boom.

Seems way overvalued for not having a moat. https://semianalysis.com/2023/05/04/google-we-have-no-moat-a...

Edit: formatting


There must be something different between corn syrup and sucrose.

I have a corn intolerance where corn and corn byproducts trigger a migraine type headache.

I assume there must be corn byproducts in corn syrup because it absolutely triggers my corn reaction, but cane sugar does not.

Also from a purely taste standpoint there is a big difference to me as well.

Lastly, you can't make peanut brittle chewy without corn syrup; with sugar alone (which is how I make it), it's very crunchy.


Corn syrup is not HFCS. HFCS is just glucose and fructose. Corn syrup has a bunch of corn starches and various other sugars, so that makes sense.


Next we'll need dolphin unions. This is actually the sub-plot of the John Scalzi novel "Starter Villain".


+1 Watch the talk to the very end. It’s great.


My uncle was a career officer in the Army and once told me that the M16, with its smaller, faster bullet, would wound the enemy, whereas the AK-47, with its larger, slower bullet, would kill, and that a wounded enemy was more expensive than a dead soldier because you had to extract and care for the wounded.

Not sure if there was any factual evidence behind his theory, but I found this idea fascinating when I heard it.

See also https://en.wikipedia.org/wiki/Comparison_of_the_AK-47_and_M1...


Short-term, perhaps, but overall the cost of replacing a dead soldier is higher (like 15 years of schooling and parenting and all the other costs). A wounded soldier might be back on the battlefield half a year later.


I started skateboarding two years ago in my backyard, and this year I started skating at my local skatepark. I'm all padded up for safety and even use hip pads. It's a very fun and cheap hobby, and I'm seeing a lot of progression.


Such a great sport - got back into it several years ago, and while I haven't progressed all that much (I'm more of a roll-around-the-parks guy than a pop-an-amazing-trick guy), I do enjoy it immensely.


Hey! I’ve been skating for a few years myself! Have fun keeping at it!

