This is not the case for LLMs: FP16/BF16 training precision is standard, and FP8 inference is very common. But labs are moving to FP8 training, and even FP4.
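The reason BF16 displaced FP16 for training is a range-vs-precision tradeoff: BF16 keeps FP32's 8-bit exponent (so huge gradients don't overflow) at the cost of a 7-bit mantissa. A minimal pure-Python sketch, emulating BF16 rounding by truncating a float32 bit pattern (`to_bf16` is a hypothetical helper, not a library API):

```python
import struct

def to_bf16(x: float) -> float:
    """Round x to the nearest bfloat16 (float32 with the mantissa cut to 7 bits)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    # Round-to-nearest-even on the low 16 bits, then truncate them.
    bits = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFF0000
    return struct.unpack(">f", struct.pack(">I", bits))[0]

# Precision: with only 7 mantissa bits, increments below 2^-7 near 1.0 vanish.
print(to_bf16(1.00390625))  # 1 + 2^-8 rounds away to 1.0; FP16 (10 mantissa bits) keeps it
print(to_bf16(1.0078125))   # 1 + 2^-7 is exactly representable

# Range: BF16 stays finite where FP16 (max ~65504) would overflow to infinity.
print(to_bf16(1e10))
```

FP8 and FP4 push the same tradeoff further (e.g. the common E4M3 format has a 3-bit mantissa), which is why they typically need per-tensor or per-block scaling to work at all.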

