
I dunno if it's always good at explaining code. It tends to take everything at face value and can't push back with an opinion when it's presented with BS, which in the majority of cases is bad.


This is also my problem. When I ask someone a technical question and haven't provided context on some abstraction (which is common, because abstractions can be very deep), they'll say, "Hmm, not sure.. can you check what this is supposed to do?"

LLMs don't do this; they confidently hallucinate the abstraction out of thin air or fall back on their outdated knowledge, producing the wrong usage or the wrong input parameters.



