
Has anyone tried limiting Cursor or Cline, etc., to a higher-level role, such as analysis and outlining proposed changes, and then coding those changes yourself with minimal LLM interaction? That is: ask it to define/outline only a high-level set of changes, but make no actual changes to any file; then proceed through the outlined work, limiting Cursor to roughing out the work and hand-writing the actual critical bits yourself. That’s the approach I’ve been taking, a sort of best of both worlds that greatly speeds me up without taking my hands 100% off the wheel.
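To make that concrete, here’s a rough sketch of the kind of plan-only instruction I mean (the wording is just illustrative, not a built-in Cursor or Cline mode):

  You are in planning mode. Analyze the relevant code and produce
  a numbered outline of the changes needed for <task>. Do not
  create, modify, or delete any files. For each step, list the
  files involved and a one-line description of the intended edit.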


This seems like the worst of both worlds. The human still has to do the "boring" work of writing out all the boilerplate, but now there is a machine telling the human what to do. Oh, and the machine is famously not great at big-picture questions while being much better at churning out boilerplate.


I find the opposite. I tend to think through the problem myself, give Cursor/Claude my understanding, guide it through the few mistakes it makes, let it leave files at 80% good enough as it codes and gets stuck, and then spend the next 20 minutes or so cleaning up the changes and fixing the few wire-up spots it got wrong.

Often I will decompose the problem into smaller subproblems and feed those to Cursor one by one, slowly building up the solution. That works well for big tickets.
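For example (the breakdown here is made up, just to show the shape of it), a big ticket might become a series of prompts like:

  1. Add a last_login_at timestamp column to the users table,
     plus the migration.
  2. Expose last_login_at through the /api/users serializer.
  3. Show a "Last seen" label on the profile page using the
     new field.

with each diff reviewed and fixed up before moving on to the next.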

For me, the time savings and force multiplier aren't necessarily in the problem solving; I can do that faster and better in most cases. But the raw act of writing code? It does that way faster than me.


Yeah, that’s been my approach as well, and honestly I’m not even sure it’s necessarily faster; it’s just different. Sometimes I feel like getting my hands dirty and writing the code myself, and LLMs can be good for getting yourself unstuck when you’re facing an obstacle, too. But other times, I’d rather just sit back, dictate requirements and approaches, and let the robot dream up the implementation itself.


Yeah. Reasoning models like R1 tend to be good at architecting changes and less effective at actually writing code, so this approach gives the best of both worlds.



