I've also found that structure is key; imposing it beats trusting the model's stream of consciousness.
For unit testing, I actually pre-write some tests so it can learn what structure I'm looking for. I go as far as to write mocks and test classes that *constrain* what it can do.
With constraints, it does a much better job than if it were just starting from scratch and improvising.
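To make that concrete, here's a minimal sketch of the kind of scaffold I hand over. Everything in it (`process_refund`, the gateway fixture, the refund semantics) is a hypothetical placeholder, not a real API:

```python
from unittest.mock import MagicMock

import pytest


def process_refund(gateway, order_id: str, amount: int) -> dict:
    # Hypothetical function under test; in a real project this lives in the
    # codebase and the model writes tests against it.
    if amount <= 0:
        raise ValueError("amount must be positive")
    return gateway.refund(order_id, amount)


@pytest.fixture
def gateway():
    # The mock constrains the model: generated tests must talk to this
    # interface instead of inventing their own I/O or fixtures.
    gw = MagicMock()
    gw.refund.return_value = {"status": "ok"}
    return gw


def test_refund_happy_path(gateway):
    # Pre-written example test: fixes the naming, the arrange/act/assert
    # layout, and the assertion style I want generated tests to copy.
    result = process_refund(gateway, order_id="A123", amount=50)
    gateway.refund.assert_called_once_with("A123", 50)
    assert result["status"] == "ok"


def test_refund_rejects_nonpositive_amount(gateway):
    # The model fills in more edge cases following this pattern.
    with pytest.raises(ValueError):
        process_refund(gateway, order_id="A123", amount=-1)
```

The fixture and the example tests pin down the interface, naming, and assertion style, so the model's job shrinks to filling in more cases in the same shape.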
There's a numerical optimization analogy to this: if you just ask a solver to optimize a complicated nonlinear (nonconvex) function, it will likely stall or get stuck in a poor local optimum. But if you carefully constrain the search space and guide the solver, you greatly improve your odds of reaching the global optimum.
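As a toy version of the analogy (a sketch assuming SciPy is available; the tilted double-well function is made up for illustration):

```python
from scipy.optimize import minimize


def f(x):
    # Double well with a tilt: minima near x ≈ 0.96 (local, f ≈ 0.29)
    # and x ≈ -1.04 (global, f ≈ -0.31).
    return (x[0] ** 2 - 1) ** 2 + 0.3 * x[0]


# Unconstrained, started on the wrong side: rolls into the nearer, worse well.
local = minimize(f, x0=[2.0], method="BFGS")

# Search space constrained (plus a start inside it): lands in the better well.
bounded = minimize(f, x0=[-0.5], method="L-BFGS-B", bounds=[(-2.0, 0.0)])

print(f"unconstrained: x = {local.x[0]:.3f}, f = {local.fun:.4f}")
print(f"constrained:   x = {bounded.x[0]:.3f}, f = {bounded.fun:.4f}")
```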
An LLM is essentially a large function evaluator with a huge search space. The more you can herd it (like herding a flock into the right pen), the better it converges.