
Could you pop your code into an LLM and ask it to write comments for you? I'm not sure how accurate it would be, though.


I've noticed that leading models also fail to understand what's happening in undocumented neural network code, so not yet, it seems.


It may be a reasonable approach if you give the model a lot of clues to start with. Basically, tell it everything you do know about the code.

I wouldn't expect miracles from just uploading a big .py file and asking it to add comments.
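
For what it's worth, a rough sketch of that approach might look something like this (using the OpenAI Python client; the model name, file name, and context strings are just placeholder assumptions):

    # Minimal sketch: pack everything you already know about the code into
    # the prompt before asking for comments.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def comment_code(source: str, context: str, model: str = "gpt-4o") -> str:
        """Ask the model to add comments, giving it our clues up front."""
        prompt = (
            "Here is what I already know about this code:\n"
            f"{context}\n\n"
            "Add explanatory comments and docstrings to the following Python "
            "file. Do not change any logic; only add comments.\n\n"
            f"{source}"
        )
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Hypothetical usage:
    # source = open("model.py").read()
    # context = "ResNet variant trained on CIFAR-10; the custom layer is a squeeze-excitation block."
    # print(comment_code(source, context))

The point is less the specific API and more that the prompt carries your own notes about the code, not just the raw file.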



