
Not to downplay the issue you raise but I haven't noticed this.

Every iteration I make on the prompts only makes the request more specific and narrow, and it has always gotten me closer to my desired goal for the PR. (But I do just ditch the worse attempts at each iteration cycle.)

Is it possible that reasoning models, combined with actual interaction with the real codebase, make this "prompt fragility" issue you speak of less common?

No, I've played with all the reasoning models and they just make the noise and weirdness even worse. When I dig into every little issue, it's always something incredibly bespoke: the documentation on the internet is out of date for the version of the library that was actually installed and the API changed, or the way a library works in one language is not how it works in another language, just all manner of surprising things. I really learned a lot about the limits of digital representation of information.
