handfuloflight's comments

Here's a hint. What goes inside the inference engine is an array. You control that array every time you call for inference.
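
A minimal sketch of what I mean, assuming an OpenAI-style chat completions client (the model name is just an example):

    # The "inference engine" only ever sees this array of messages.
    # You rebuild it, and so fully control it, on every call.
    from openai import OpenAI

    client = OpenAI()
    messages = [
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": "Review this diff: ..."},
    ]
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(reply.choices[0].message.content)

Drop, reorder, or rewrite anything in that array before the next call; there is no hidden state beyond what you put in it.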

You're describing the classic developer dopamine loop, just faster now.

Spinning up test after test, tweaking parameters, chasing that "it works!" high, that's what debugging has always been.

You're doing the exact same thing with your code that you're criticizing him for doing with AI. Same sunk cost fallacy ("I've already spent 3 hours, might as well get it working"), same illusion of control, same "I'm on a roll" feeling when the tests finally pass.

The only difference is speed. He gets micro-hits every 10 seconds watching tokens stream. You get them every time you re-run your test suite. Same gambling structure, same reward circuit lighting up, you've just normalized yours because it happened slowly enough to not look like a slot machine.

And you're the one reducing it to "gambling" unless you're claiming human developers experience zero dopamine and write code with omniscient correctness the first time. If they don't, if there's iteration, failure, reward, then you're describing the same neurochemistry. You've just decided it only counts as "gambling" when it makes you uncomfortable.


Counterargument: one is actually capable of reasoning, the other is predicting the next token and brute-forcing until checks pass.

Running and slots are both addictive, so they must be equally bad for you, right?

Pricing?

We do a two-week trial and then it's $0.20 per file reviewed. Buying in bulk + optimizing rules gives a significant discount.

Does this produce actual lint rules, or are you templating out lint-like replies from an LLM using a response format?

If you're doing inference, just give me a CLI that's userless and free. I'm happy to use leftover Codex plan tokens or Gemini free tokens for this, and while the idea seems interesting and I might be upsellable to more features down the line, the price/offering is a non-starter.


We combine determinism + LLMs to catch things a human would normally have to. If the LLM finds a violation, it generates a comment.

Big agree on the CLI being open and letting you bring your own inference provider. We’re holding off on it until we get more feedback from some of our hardcore users.


What are you using for "determinism"? Sounds to me like you might just be running ESLint et al. and then charging a fee for it.

We use ast-grep for the determinism part. I should have clarified: we don't charge for fully deterministic runs, only ones where the LLM is involved as a judge.
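
Roughly the shape of it, simplified (llm_judge here is a stand-in for the metered call, not our real API):

    import json, subprocess

    def deterministic_findings(path):
        # ast-grep emits rule violations as JSON; this pass is free
        out = subprocess.run(["ast-grep", "scan", "--json", path],
                             capture_output=True, text=True)
        return json.loads(out.stdout or "[]")

    def review(path, fuzzy_rules, llm_judge):
        comments = deterministic_findings(path)    # deterministic, unmetered
        source = open(path).read()
        for rule in fuzzy_rules:                   # LLM-as-judge, metered
            verdict = llm_judge(rule, source)
            if verdict["violation"]:
                comments.append(verdict["comment"])
        return comments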

Is that a "yes" on lint rules? AI needs determinism to block commits because once the slop hits code review, it's already a gigantic waste of time. AI needs self-correcting loops.

It supports fully deterministic rules, which we use LLMs to help you write.

Agreed on all of this too. This is why we built the CLI tool - to shift the work left.
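
For example, the deterministic pass can sit in a plain git pre-commit hook so violations get blocked before they ever reach review (a sketch, not our shipped hook; it assumes an sgconfig.yml with your rules):

    #!/usr/bin/env python3
    # .git/hooks/pre-commit (chmod +x): abort the commit if any
    # deterministic ast-grep rule is violated.
    import json, subprocess, sys

    out = subprocess.run(["ast-grep", "scan", "--json", "."],
                         capture_output=True, text=True)
    findings = json.loads(out.stdout or "[]")
    if findings:
        print(json.dumps(findings, indent=2))
        sys.exit(1)  # non-zero exit blocks the commit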


What do we build now to reap the coming of the messianic era?

Sora, show me this.

I actually found in my case that it's just self-inertia in not wanting to break through cognitive plateaus. The AI helped you with a breakthrough, hence the magic, but you also did something right in constructing the context of your conversation with the AI; i.e., you did thought and biomechanical[1] work. Now the dazzle of the AI's output makes you forget the work you still need to do, and the next time you prompt you get lazy, or you want much more for much less.

[1] Moving your eyes and hands, hearing with your ears, etc.


Google Chrome: "Dangerous site. Attackers on the site you tried visiting might trick you into installing software or revealing things like your passwords, phone, or credit card numbers. Chrome strongly recommends going back to safety."

Thanks for highlighting this, fixed now!

Needed to validate the domain with Google Search Console...


> My experience is it often generates code that is subtly incorrect.

Have you isolated whether you're properly homing in on the right breadth of context for the planned implementation?


Aah, he must be prompting it wrong

Disingenuous reduction.

And in my experience, it always comes down to "you're holding it wrong" or "that LLM is older than 20 minutes."

You know what they say, if everyone you meet is a jerk, then...

Agents are the oxen pulling the plow through the seasons... turning over ground, following furrows, adapting to terrain. RAG is the irrigation system. Prompts are the farmer's instructions. And the harvest? That depends on how well you understood what you were trying to grow.

Locusts are when LLMs inexplicably rewrite your existing code despite inline and prompt instructions not to.

Right, this same sleight of hand is encoded in the language of the announcement, to make building on this platform seem attractive.
