
    I've come up with a set of rules that describe our reactions to technologies:
    1. Anything that is in the world when you’re born is normal and ordinary and    is just a natural part of the way the world works.
    2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
    3. Anything invented after you're thirty-five is against the natural order of things.
-- Douglas Adams

Let me guess your age.



The younger generations do seem to be embracing AI more, but mainly because it can do their homework for them without requiring them to learn anything. For now, at least, until curricula have time to adjust to this new reality.


I initially picked up programming because I wanted to create things (well, initially break things, but I moved on from that), and programming was one way of doing that. I only learned how to structure my programs because they became hard to change. I only learned testing and refactoring because I noticed I was faster when the code was better and more thoroughly tested, even if the upfront cost was slightly higher.

If I were 14-15 today, just picking up programming, but with an LLM at my side, I'm honestly not sure what the outcome would be. I'd use them, that's for sure, but once I got a working application out of them, would I be curious enough to understand as much as I understand now, if it wasn't required? Or would I have learned even more, and faster, since I wouldn't have been all alone banging my head against some trivial problem for weeks?


> Or would I have been able to learn even more and faster, since I wouldn't have been all alone banging my head against some trivial problem for weeks?

This was my experience. I’m not a programmer but I enjoy playing around with Python. I work with it a lot more and make far more complicated programs now because I don’t have to spend half an hour trying to find a solution to my problem on StackOverflow.

The issue I found is that I don’t bother trying to build anything on my own anymore, so even though I’ve learned a lot about designing programs, my knowledge of Python has probably actually declined.


> would I have been able to learn even more and faster, since I wouldn't have been all alone banging my head against some trivial problem for weeks?

Would even that have some downsides in the long term, since the process of banging one's head could be crucial to rewiring your brain to understand new concepts?


Yeah, I'm guessing that most of my valuable knowledge comes from sessions like this, where something is hard, and eventually you solve it. If I was in the same situation today, I'd probably keep throwing stuff at the LLM until it got it right, and not really learn anything.

But then on the other hand, the point was never to learn, but to create. I'm still not sure if I'd be better or worse at actually creating things if I had LLMs in the beginning or not.


I think the argument being made is: if there's no added value in understanding something more deeply, because the AI will inevitably do it better than it does now, then the only reason to dig deeper is one's own curiosity.

All that digging now translates to expertise that is critical and potentially lucrative. But if it's only going to get better then for how long will this be true?


It's like computing gave us the most powerful paintbrush of all time and we said, "nah, you do the painting".


> all alone banging my head against some trivial problem for weeks?

thanks for this lil' dose of copium, so at least it qualifies as character building, what a relief


AI is just another abstraction. It's not as if a senior Java dev could implement LocalDate.now() or CompletableFuture.join() from scratch.
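To illustrate the point (a minimal sketch; the class name here is made up for the example): each of these calls is one line for the caller, but hides timezone databases, system clocks, thread pools, and scheduling underneath, none of which the typical user could reimplement.

```java
import java.time.LocalDate;
import java.util.concurrent.CompletableFuture;

public class AbstractionDemo {
    public static void main(String[] args) {
        // One line hides the system clock, the timezone database, and calendar math.
        LocalDate today = LocalDate.now();
        System.out.println("Today: " + today);

        // One call hides a thread pool, task scheduling, and memory synchronization.
        CompletableFuture<Integer> future = CompletableFuture.supplyAsync(() -> 21 * 2);
        System.out.println("Answer: " + future.join());
    }
}
```

Whether an LLM is "just" another such layer, or qualitatively different because its behavior isn't specified, is of course the point being debated.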


I don't know if it's "embracing" it, it's just a fact of life.

I remember an interview with Marc Andreessen in which he spoke about introducing his 8-year-old to ChatGPT. He described the moment as monumental, likening it to "bringing fire down from the mountains." However, his son was unimpressed by the technology, responding, "It's a computer. Of course you ask it questions and it gives you answers. What else is it for?"


I compare it to something like PEDs or even painkillers. These too are a fact of life, but going down that road makes a lot of decisions for you.

AI in certain hands will be fine. In other hands it will be a disaster for the person that uses it, because they will not perform the reps they need to really be able to think. Believing a computer is a magic box that gives answers is not great, it's adjacent to "believe and do what the computer tells you".


I am old. But when I was young I could not comprehend why they wouldn't let us use calculators for tedious calculation tasks that humans have a hard time computing.


It's creating devastating effects in higher education here. I'm a bit older but did a masters after working a few years, and I've now decided to quit because most - if not all - students just upload the problem sheets to an LLM and copy the output. Group projects used to be really intense and interesting here - now my partners in the group projects ask me to explain their own code to them. It's not an ivy-league university, but it used to be that I had a lot of fun working hard with other students on the projects, and we learned a lot doing this - that is completely gone. It's 100% transactional - how can I go through this as fast as possible - and as a result people fail the exams at unseen rates, like 50-80%, in classes that can be passed by studying for a few days and doing the exercises yourself.

I've been suffering from quite a few ADHD symptoms for the past 20 years - I got a diagnosis long ago, and I can survive even if it's a shitty thing to have - but it feels like now everyone around me shares the same fate, and people seem to have forgotten how to work or study, or worse - never learned it. I used to be an outlier, sharing my fate with 2 or 3 other people in the class back when doing the bachelors - we failed in spectacular ways in some areas while outshining everyone else in others, but it was an honest struggle. I'm okay with that - I'm not made to be a researcher or to write a PhD in computer science or math. But I can work in my area of expertise - however, what is going to happen with all these graduates?

Is this the fault of AI? Not really, but society isn't prepared for what is happening now. People correcting the exercises tell me it's impossible to prove LLM usage and that 90% of the submissions are just ChatGPT output - funnily enough, this was a machine learning 101 class.

Another thing I've noticed, often when looking at other students' LLM prompts and how they apply the results, is that the prompts don't really help them learn and improve - you are stuck at your level of knowledge, and so are your prompts, the quality of the questions you ask, and the way you handle the answers, which results in very weird effects. So you talk with your group member about loading some binary serialized arrays for a computer vision project and using numpy to do some calculations. Next meeting, you get some code that does something, but it uses another dataset and completely different code, runs 100x slower, and solves a slightly different problem. All you get is a shrug. I'm better off watching YouTube or working than staying in university. It's not the fault of the teachers, but I came back for the human interaction and because I don't want to learn alone. That is almost gone here.

All of this - even if it sucks - would be somehow okay, but the thing I'm most scared of lately is the blatant dishonesty and lying I've been seeing in other students about their usage of AI - it's creating a kind of person that only pretends to understand what it's doing but reliably fails to actually understand what the LLMs tell them. I'm not made to deal with this and I'm getting angry. Tell me you've used an LLM and you're not sure about the results, and we can talk about it, work through it, and improve upon it. Then it's actually a great thing to have LLMs - but I'm not seeing it.

This will be interesting - not in the good way.


> It's 100% transactional - how can I go through this as fast as possible - and as a result people fail the exams at unseen rates, like 50-80%, in classes that can be passed by studying for a few days and doing the exercises yourself.

This is a funny post because this was my experience 20 years ago in college. Nothing new from LLMs here. My big takeaway from my experience in different universities (I attended several different ones) is that the content in all the universities is largely the same - the main difference is that you are picking the caliber of your classmates. My classmates in one technical, well-regarded university were MILES beyond the classmates in a local second-tier state university in terms of intelligence, passion, and drive; and those classmates were more attentive and intelligent than the ones in a third-rate community college.


Fair point. Very different people from very different backgrounds here in the masters compared to the bachelors. Also 15 years between both experiences. Maybe I'm just getting old and starting to yell at imaginary clouds. Thanks for the heads-up! Still sucks for everyone involved there who is motivated, though.


That's what happens when the entire education system is optimized for grades instead of the use and acquisition of knowledge. In this case, LLMs are just laying bare our failings as a society.


I'll bite.

I received my PhD in Computer Science focused on NLP and creative text generation last year and I think the hype around LLMs is ridiculous (academics are no better than industry on chasing hype). They're trained to predict the next token given a context, and that's exactly what they're good at.

How old do you think I am?


A transistor is just a way of connecting some wires, and that's exactly what it's good for. It's reducing a phenomenon into some core essence and pretending like there's not a bigger picture.


Doesn’t that feel a little bit like saying that the hype around transistor-based logic gates is ridiculous because they’re designed to execute Boolean logic, and that’s exactly what they’re good at? The simple mechanism isn’t what’s exciting. The exciting part is composing that into a symphony of functionality, running fast and cheap, to better our lives.


no, because before transistors we had vacuum tubes so the functionality of a transistor was well understood and the breakthrough was in size and power consumption.

the analogy would be more apt if tomorrow I could run ChatGPT 4o, the hosted model, on my wrist watch, and run it indefinitely for pennies.


3.5 turbo?


I was in my 20s when crypto was "it" and I was definitely in the last group about it, so it's definitely not just about age, even though there's probably some correlation.


> "is new and exciting and revolutionary and you can probably get a career in it."

Does this not explain why you just got your PhD in this? ("This" being broad, but "NLP and creative text generation" sounds like it's in the same ballpark as LLMs.)


Nope. I did it purely due to long-term intellectual curiosity.

I first pursued "AI" in undergrad during the last AI winter. For example, the only professor who taught neural networks at Purdue was in the EE dept, not CS, and was retiring the semester I was first qualified to study it. There weren't enough seats in the class, and since it was graduate level, I wasn't allowed to take it as an undergrad.

I really tried every avenue I could think of at the time to pursue AI — taking part in Robocup, taking classical AI (also from the EE dept), etc. None of what I was exposed to seemed like it was pushing the intellectual boundaries, so I instead got into video game AI as a way to pursue AI (a number of famous ML researchers like Demis Hassabis got their start in video games).

When I started my PhD a very tiny group of researchers were looking at text generation, let alone for creative text. The idea was very niche.

Note, I only pursued a PhD after I got an interview at OpenAI in 2017 that made me realize a PhD was likely necessary to pursue research.


Thanks. Sometimes I feel like I'm going insane attempting to reason with people who think the opposite: that these are oracles imbued with human-level intellect and creativity.

Now, sure, these models can be impressive - but it's a warped lens of humanity's own impressive (selected) corpora.


I'm in my mid-forties and I think the LLM revolution is amazing.

It reminds me of the dotcom era in many ways: a genuinely transformative technology which is currently no more than maybe 20% of the way into realising its potential; a technology for which expectations have been hyped up to maybe 200% of potential; and a technology around which a stockmarket bubble has formed.

I'll leave the rest of the LLM story to the reader's imagination, but to this slightly fragmented and ossified mind it's extremely obvious what happens next, and then what happens after that, and then after that (which is when we get to the really good bit). So no, I'm not bored, and I'm not tired. I'm as happy to be working in technology now as when I was a younger man. Happier in some ways, even.


I'm a big fan of Douglas Adams, but there is a reason he was (best known as) a comedy writer and not, for example, a sociologist. Trotting this out adds nothing to the conversation and just comes across as vaguely ageist.


Call it ageist, but this aligns with the conversation about "it" at my job. The cutoff is around 42, but there is a significant split by age group of engineers on the value of "it".


There is some truth to it, but it also isn’t exactly accurate. #2 isn’t true for me in its generality. Some new things were exciting and revolutionary, but by no means all. Regarding #3, some new things still excite me today, and many more could excite me, but nobody is making them. Even #1 isn’t accurate. What’s true is that “excitability” goes down over time the more one is aware of the flaws and trade-offs.


I don't think it has to do with age so much as with active years of work experience in the field. There is some strong correlation, but I'm 44 and, with 3 years of experience, I have integrated AI tools into my workflow because they're just tools right now and it would be silly not to leverage them.


I added them, because they are a plug-in like anything else, but rarely find a use for them. They are best at helping me remember syntax for single statements, nothing more. Like autocorrect but vaguer.


I found the parent comment humorous, as it cites a lighthearted quote from Douglas Adams. It is relevant to the conversation in a similar way a relevant xkcd is.


Based on those rules your guess would be wrong :)

This is not about age really: "NFTs"/"web3.0"/"Blockchain technologies" for instance were hyped by every age group.


"Always be wary of any helpful item that weighs less than its operating manual."

Terry Pratchett


This can be taken as an instance of the Shifting Baseline phenomenon [1]. The fact that we can only perceive certain changes over large timescales doesn't mean we can safely ignore them. It's harmful to ignore experienced perspectives.

[1]: https://en.wikipedia.org/wiki/Shifting_baseline


I do think this is true. I'm 46 and I find myself wondering when things are going to "return to normal". But I can't really define what that is besides saying "2019". I'm not even sure what I'm referring to other than I hate short form video. I don't know how I feel about AI. It does seem like something that has a lot of promise though if we can figure out the context issues.


I'm against calculators in schools due to their detriment on learning.

What's my age?


This was written at a time when people had lived through inventions like the internet, personal computers, refrigerators, microwave ovens, jet liners, vaccines, television, etc. While all of those inventions had negative externalities, their primary function overall improved people's lives.

Generative AI is all negative externalities.


Well, if you have ADHD, you are always fifteen. Novelty-seeking is built in, part of physiology, cannot be ignored. I am 42, and LLMs still provide a lot of excitement to me.


As much as I like Douglas Adams' work, I think that quote describes what, for lack of a better term, and in no way trying to be derogatory, I'd call a "normal person"'s reaction to technologies.

I mean obviously the age boundaries may change a bit but otherwise he's spot on. Also, I'd say it's not just technology as I've seen a similar attitude towards other things (e.g, how a dress code is not that common any more, or how you can walk up to a store or even go to a hospital and be greeted by someone with tattoos and green hair "is against the natural order of things").

And to stress again that I'm not being derogatory, I've got close people who have those reactions and I love them even if I disagree with them.

But for people not in that "normal" group my experience is nothing like that. I've seen:

- People being amazed at how exciting something that existed when they were born is, to the point of building emulators and even physical replicas of it.

- People way after 35 who go deep in on very new things that drastically change the way they work (the example I'm thinking of here is the old musicians I know who've embraced modelers and DAWs, things that only got good enough to be used by professionals in, what, the last 20 years?).

- In that same line though, I know young kids who use portastudios and hate DAWs. Some release cassettes too. One has to guess their target audience never wasted life untangling tape and rewinding with a ball pen...

Still, I think your last line is most likely spot on as far as the author goes. Probably going through some mid-life crisis (lest this come across as ageist, I'm way past half my country's life expectancy).

Also, "Every pub conversation winding up talking about it."

That's just the author going to the wrong pubs! Like the Jonathan Richman song says:

"Well the first bar things were alright But in this bar, things were Friday night."


> Someone said something cool once hence it's valid ad vitam aeternam and can be used as an argument in a discussion.

Mass automatised eugenics robots? Well, if you don't like it you must be a boomer.

Brain implants that are controlled by your employer and can literally kill you at the push of a button if you don't follow the rules? What? You don't like it? You dumb luddite.

Between that and the "TeChNoLoGy iS JuSt a ToOl, a HaMmEr Is NeItHeR gOoD nOr BaD" people...


How is the moral dilemma of employer-controlled brain implants or eugenics equal to AI? The reason the quote can be applied is because it is a genuinely useful technology to lots of people. That's not the case for eugenics robots.

Did you grow up watching Data (Star Trek), C3PO (Star Wars), KITT (Knight Rider) thinking "Who comes up with these violent sadistic ideas"?


> How is the moral dilemma of employer-controlled brain implants or eugenics equal to AI?

Well, it's just a technology, don't you love technology? Technology is progress and progress is good!

> it is a genuinely useful technology to lots of people. That's not the case for eugenics robots.

Eugenics would objectively solve a lot of suffering though, your AI might even come to that conclusion

> Did you grow up watching Data (Star Trek), C3PO (Star Wars), KITT (Knight Rider) thinking "Who comes up with these violent sadistic ideas"?

That's how 80% of ai "enthusiasts" sound to me, "AI is good because these sci-fi AIs were cool when I was a kid"


Are you being obtuse on purpose?


His purse is empty already, all’s golden words are spent.

– William Shakespeare


Those rules do not work for me.

I hate AI.


well i dont know who in the world Douglas is, probably some geriatric weirdo but he is completely wrong.

am in the second group and ur comment just feels like ragebait, sure there are enough “hip and trendy” teenagers to twenty, thirty something people who like influence more than technology and are on their “vibe code” grind and idk, fucking their AI every night with their claude wrapped fleshlight but I for one am not one of them.



