Is this rant really necessary? Most models, ChatGPT-4 especially, can perform carry-based addition, and there is zero reason for them to fail at it. But the moment you start using quantized models, such as 5-bit Mixtral 8x7B, the quality drops annoyingly. Is it really too much to ask? It's possible, and it has been done. Now I'm supposed to whip out a Python interpreter for this stuff (see the sketch below), because the LLM is literally pretending to be a stupid human? Really?
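For what it's worth, the workaround amounts to routing arithmetic around the model entirely. A minimal sketch of that pattern (the regex guard and the eval_arithmetic helper are my own illustration, not any vendor's API):

    import re

    def eval_arithmetic(expr: str) -> str:
        # Allow only digits, whitespace, and basic operators,
        # so eval() can't execute anything but plain arithmetic.
        if not re.fullmatch(r"[\d\s+\-*/().]+", expr):
            raise ValueError("not a plain arithmetic expression")
        return str(eval(expr))

    # Instead of asking the model "What is 48727 + 98341?",
    # hand the expression to the interpreter:
    print(eval_arithmetic("48727 + 98341"))  # prints 147068

That is exactly the "tool use" pattern the rant is complaining about: the LLM delegates the sum instead of just carrying the digits itself.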

