Hacker News

Clearly your original comment was unfair.


Is it, though? The major selling point of coding LLMs is that you can use natural language to describe what you want. If minor changes to wording - ones that would make no difference to a human - can result in drastically worse output, that feels problematic for real-world use.


The model is small, so its semantic understanding is weaker.


I get that. But they are explicitly comparing it to Codex themselves.


The criticism stands if you have to keep rewriting your "prompt" until you can coax out the desired output.



