Hacker News

You don't get anywhere faster "learning" from something that lies to you 20% of the time.

It's a bit like working with a bad colleague who is very fast, but very arrogant. You can't trust what they say because they're wrong often enough to make costly mistakes common. But you can't fight them on every little thing, either. The only solution is to already be an expert, and ignore them when they're wrong.

I honestly believe AI -- if it has a dramatic impact at all -- will only reduce the value of junior employees.



My entire formal education and subsequent career stands in opposition to this statement. Unless you mean that learning requires being lied to more than 20% of the time.


That sounds like exaggeration to me, in service of a bias against "formal education". But OK. YMMV.

My statement applies only to the experience of working with the things, and relates not at all to "formal education". If I have to learn the subject to debug what they're putting out, then the rate-limiting step of using them is...learning the subject. Same as it ever was.

Having a stochastic parrot spit a stream of 20% nonsense at me doesn't make learning go faster -- it definitely does make work go faster if I'm already an expert, however.


I actually think the time I spent in school was valuable and that formal education gets a bad rap around here. I believe that my teachers all had good intentions. But they weren't always right. In my experience nobody is right 80% of the time.

I'm skeptical of the AI hype, but I do believe there is value. Similar to self-driving cars, an AI assistant or teacher doesn't have to be right all the time; it only has to be right more often than the alternative. Proper use of this tool will require skill-building like anything else.


Oh, I'm not saying there's no value, just that I don't think the value is nearly the magnitude being hyped, and certainly not for the "speeding up the junior to senior transition" posited by the comment at the top of the thread. And sure, every teacher is wrong at some rate -- but the way we deal with that is by thinking for ourselves, asking lots of teachers, working out the differences, etc. This inherently takes time.

Pick some domain that you know nothing about, and ask a transformer model to solve a known problem in the space. It will give you a reply. Is the reply correct? Assuming that you even know how to ask the right question to get a sensible answer (which itself requires expertise), assessing the quality of the answer certainly requires expertise that you don't have. So either you figure it out for yourself (as slow as learning from any other source), or you take it on blind faith.

If I had to wager on the area where I think these models are going to lead to big changes, it's reading and summarization, not generation. "Describe how node deletion in b-trees works in 500 words" is a heck of a lot more useful than asking a transformer to write code to implement node deletion in a b-tree.


I can't really speak for previous posts, but I've personally seen a huge uptick in how fast I can move through material using LLMs - not because I'm asking them how things work or to explain things to me, but because they remove the slog of memorizing ancillary boilerplate and help me find the terms I need to search the docs for.

These are very specific things that you can't get as efficiently elsewhere. Essentially, it's not so much about the language model outputting information that I can understand, but more about the language model being able to parse MY queries and respond with something that's kinda good-enough.

Say you're looking at a chunk of code you didn't write, and you want to know what function foo_man_chu() is doing. You could go straight to the documentation for that function, but in many cases it's interacting with a bunch of other systems that are documented elsewhere, and not referenced in that material. If you ask GPT to explain it to you, it pops out a (reliable enough) list of things to RTFM on. This takes what could be a couple of hours of deep reading through Stack Overflow and Google and distills it down to like ten minutes.
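That workflow can be sketched with any chat-style API. Here's a minimal Python sketch, assuming hypothetical names: foo_man_chu() and the code snippet are stand-ins from the example above, and the commented-out OpenAI client call is just one way to send the prompt, not the only one:

```python
# Sketch of the "explain this function, then point me at the docs" workflow.
# foo_man_chu() and SNIPPET are hypothetical stand-ins for real code you
# didn't write and want mapped to its surrounding systems.

SNIPPET = "result = foo_man_chu(conn, retries=3)"

def explain_prompt(code: str) -> str:
    """Build a prompt asking the model to map unfamiliar code to the
    subsystems and docs worth reading, rather than to rewrite it."""
    return (
        "Explain what this code is doing, and list the subsystems, "
        "libraries, or protocols it touches that I should read the "
        "documentation for:\n" + code
    )

prompt = explain_prompt(SNIPPET)
print(prompt)

# Sending it requires a client and an API key; sketch only:
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(reply.choices[0].message.content)
```

The point of the prompt shape is that you're asking for a reading list, not generated code, so a partially wrong answer still leaves you with search terms to verify against the actual docs.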

This effect is magnified the less you know about the domain, if you do it right.

Multiply that by the number of times you have to do this in the course of teaching yourself something and it adds up quite a bit. This may be what the parent is talking about.




