You're describing the classic developer dopamine loop, just faster now.
Spinning up test after test, tweaking parameters, chasing that "it works!" high: that's what debugging has always been.
You're doing the exact same thing with your code that you're criticizing him for doing with AI. Same sunk cost fallacy ("I've already spent 3 hours, might as well get it working"), same illusion of control, same "I'm on a roll" feeling when the tests finally pass.
The only difference is speed. He gets micro-hits every 10 seconds watching tokens stream. You get them every time you re-run your test suite. Same gambling structure, same reward circuit lighting up, you've just normalized yours because it happened slowly enough to not look like a slot machine.
And unless you're claiming human developers experience zero dopamine and write omnisciently correct code on the first try, you're the one reducing this to "gambling". If there's iteration, failure, and reward, then you're describing the same neurochemistry. You've just decided it only counts as "gambling" when it makes you uncomfortable.
Does this produce actual lint rules, or are you templating out lint-like replies from an LLM using a response format?
If you're doing inference, just give me a CLI that's userless and free. I'm happy to use leftover Codex plan tokens or Gemini free tokens for this, and while the idea seems interesting and I might be upsellable to more features down the line, the price/offering is a non-starter.
We combine determinism + LLMs to catch things a human reviewer would normally have to catch. If the LLM finds a violation, it generates a comment.
Big agree on the CLI being open and letting you bring your own inference provider. We’re holding off on it until we get more feedback from some of our hardcore users.
We use ast-grep for the determinism part. I should have clarified: we don’t charge for fully deterministic runs, only for ones where the LLM is involved as a judge.
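For anyone wondering what the deterministic half looks like in practice, here's a minimal sketch using ast-grep's Node bindings (@ast-grep/napi). The console.log rule and sample source are hypothetical, not the vendor's actual rules; the point is that this layer is a plain AST query, so the same input always yields the same matches.

```typescript
import { parse, Lang } from "@ast-grep/napi";

// Hypothetical source to scan; imagine this is a staged file.
const source = `
function handler(req) {
  console.log(req.body); // leftover debug logging
  return req.body;
}
`;

const root = parse(Lang.TypeScript, source).root();

// Structural pattern match: $ARG is a metavariable that binds one
// expression. No LLM involved, so the results are deterministic.
for (const match of root.findAll("console.log($ARG)")) {
  const { line } = match.range().start; // ast-grep positions are 0-based
  console.log(`line ${line + 1}: remove debug call: ${match.text()}`);
}
```

Given the pricing described above, anything caught at this stage would presumably be free; the LLM-as-judge pass only runs (and bills) for the fuzzier checks a pattern can't express.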
Is that a "yes" on lint rules? AI needs determinism to block commits because once the slop hits code review, it's already a gigantic waste of time. AI needs self-correcting loops.
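(For what it's worth, the "block commits" half doesn't need AI at all. A hypothetical pre-commit gate wired to the deterministic layer could look like the sketch below; it assumes, as with most linters, that ast-grep scan exits nonzero when an error-severity rule matches.

```typescript
import { execFileSync } from "node:child_process";

try {
  // Run the deterministic scan over the repo before the commit lands.
  execFileSync("ast-grep", ["scan"], { stdio: "inherit" });
} catch {
  // A nonzero exit means violations were reported; refuse the commit
  // so the slop never reaches code review.
  console.error("commit blocked: fix the reported violations first");
  process.exit(1);
}
```

Invoke it from .git/hooks/pre-commit and the self-correcting loop happens before review, not during it.)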
I actually found, in my case, that it's just self-inertia: not wanting to break through cognitive plateaus. The AI helped you with a breakthrough, hence the magic, but you also did something right in constructing the context of the conversation with the AI; i.e., you did thought work and biomechanical[1] work. Now the dazzle of the AI's output makes you forget the work you still need to do, and the next time you prompt you get lazy, or you want much more for much less.
[1] Moving your eyes and hands, hearing with your ears, etc.
Google Chrome: Dangerous site
Attackers on the site you tried visiting might trick you into installing software or revealing things like your passwords, phone, or credit card numbers. Chrome strongly recommends going back to safety. Learn more about this warning
Agents are the oxen pulling the plow through the seasons... turning over ground, following furrows, adapting to terrain. RAG is the irrigation system. Prompts are the farmer's instructions. And the harvest? That depends on how well you understood what you were trying to grow.