Hacker News

At present, LLMs are basically Stack Overflow with infinite answers on demand... of Stack Overflow quality and relevance. Prompting is the new Googling. It's a critical base skill, but it's not sufficient.

The models I've tried aren't that great at algorithm design. They're abysmal at generating highly specific, correct code (e.g. kernel drivers, consensus protocols, locking constructs). They're good plumbers. A lot of programming is plumbing, so I'm happy to have the help, but they have trouble doing actual computer science.

And most relevantly, they currently don't scale to large codebases. They're not autonomous enough to pull a work item off the queue, make changes across a 100kloc codebase, debug and iterate, and submit a PR. But they can help a lot with each individual part of that workflow when focused, so we end up in the perverse situation where junior devs act as the machine's secretaries while the model does most of the actual programming.

So we end up de-skilling the junior devs, but the models still can't replace the principal devs and researchers, so where are the principal devs going to come from?



>The models I've tried aren't that great at algorithm design. They're abysmal at generating highly specific, correct code (e.g. kernel drivers, consensus protocols, locking constructs). They're good plumbers. A lot of programming is plumbing, so I'm happy to have the help, but they have trouble doing actual computer science.

I tend towards tool development, so this suggests a fringe benefit of LLMs to me: if my users are asking LLMs to help with a specific part of my API, I know that's the part that sucks and needs to be redesigned.



