
The marketing language suggests they're insecure about quality and want to promote quantity instead. But I'm in the same boat as you: I would happily take 10 tok/sec of a correct answer over wasting an hour curating 4,500 tok/sec throwaway answers. Benchmark performance matters 100x more than latency.

If these "hot takes" extend into Morph's own development philosophy, then I'm glad not to be a user.



No error rate is acceptable to us - edits should always be correct. We've just found, anecdotally, that saving users time is also demonstrably important for churn, retention, and keeping developers in flow state - right after accuracy.


Then why are you using a custom model instead of an industry-leading option?

I don't mean to be rude, but I can't imagine you're selling a product on par with Claude 3.7. Some level of performance tradeoff has to be acceptable if you prioritize latency this hard.


We're not - our model doesn't actually think up the code changes. Claude 4 or Gemini still writes the code; we're just the engine that merges it into the original file.

Our whole thesis is that Claude and Gemini are extremely good at reasoning and coding - so you should let them do that, and pass the result to Morph Fast Apply to merge the changes in.
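To make the division of labor concrete, here's a sketch of the flow from the client side. The tag format, model name, and payload shape are illustrative assumptions, not Morph's documented API - the point is just that the frontier model emits an abbreviated update (with a marker standing in for untouched code), and the apply model receives both the original file and that update to merge.

```python
# Sketch of the fast-apply flow described above. The request shape and
# model name are placeholder assumptions, not a real documented API.

ORIGINAL = """\
def total(items):
    return sum(items)
"""

# A frontier model (Claude/Gemini) writes only the changed region,
# using a marker for untouched code instead of re-emitting the file.
UPDATE = """\
# ... existing code ...
def total(items):
    return sum(i.price for i in items)
"""

def build_apply_request(original: str, update: str) -> dict:
    """Package the original file and the abbreviated update for a
    hypothetical fast-apply endpoint, which merges them into the
    full edited file."""
    return {
        "model": "fast-apply-model",  # placeholder name
        "messages": [{
            "role": "user",
            "content": f"<code>{original}</code>\n<update>{update}</update>",
        }],
    }

request = build_apply_request(ORIGINAL, UPDATE)
```

Because the apply model only has to reconcile two texts rather than invent code, it can be small and fast without doing any of the reasoning itself.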


This is a code editing model. At 10 tokens per second, editing may as well not exist for any interactive use case.


Anyone can get 10 tok/sec - just tell the model to output the entire file with the changes, rather than just the delta.

Whatever LLM you're using will have a baseline error rate a lot higher than 2%, so you'll be reviewing all the code it outputs regardless.


Yeah, even Claude is well over an 11% error rate with search-and-replace edits.



