
No, but you can understand them if given time. And you can rely on them to be reliable to a degree approaching 100% (and when they do fail, it will likely be in a consistent way that you can understand with enough time, and likely fix).

LLMs don’t have these properties. Randomness makes for a poor abstraction layer. We invent tools because humans suffer from this issue too.
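To make the contrast concrete, here's a minimal sketch. The `llm_abstraction` function is a hypothetical stand-in for any sampled model call (not a real API), using `random.choice` just to imitate nondeterministic output:

  import random

  def sqrt_abstraction(x):
      # Conventional abstraction: same input, same output, every time.
      return x ** 0.5

  def llm_abstraction(prompt):
      # Hypothetical stand-in for a sampled model call: same input,
      # potentially different output on each invocation.
      return random.choice(["4", "four", "The answer is 4."])

  print(sqrt_abstraction(16) == sqrt_abstraction(16))  # always True
  print(llm_abstraction("sqrt of 16?") ==
        llm_abstraction("sqrt of 16?"))                # sometimes False

The first kind of failure you can reproduce and debug; the second you can only characterize statistically.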




