> If, on the other hand, it’s knowledge that isn’t well (or at all) represented, and instead requires experience or experimentation with the relevant system, LLMs don’t do very well. I regularly fail with applying LLMs to tasks that turn out to require such “hidden” knowledge.

It's true enough that there are many tasks like this. But there are also many relatively arcane APIs/protocols/domains that LLMs do a surprisingly good job with. I tend to think it's worth checking which bucket a task falls into before spending hours or days hammering something out myself.

I think many devs are underestimating how arcane the knowledge needs to be before an LLM will be hopeless at a knowledge-based task. There's a lot of code on the internet.
