Hacker News

The point about using FP32 for training is wrong. Mixed precision (FP16 multiplies, FP32 accumulates) has been used for years; the original paper came out in 2017.
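The FP16-multiply / FP32-accumulate pattern can be demonstrated numerically. Here is a pure-Python sketch (illustrative only, not how GPU tensor cores are actually programmed) that uses `struct`'s binary16/binary32 formats to round intermediate values, comparing an all-FP16 dot product against one with an FP32 accumulator:

```python
import math
import random
import struct

def fp16(x):
    """Round a Python float to the nearest IEEE 754 binary16 value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

def fp32(x):
    """Round a Python float to the nearest IEEE 754 binary32 value."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

random.seed(0)
n = 4096
a = [fp16(random.uniform(-1, 1)) for _ in range(n)]
b = [fp16(random.uniform(-1, 1)) for _ in range(n)]

# Naive FP16 dot product: every product and every partial sum is
# rounded back to FP16, so rounding error compounds at each step.
acc16 = 0.0
for x, y in zip(a, b):
    acc16 = fp16(acc16 + fp16(x * y))

# Mixed precision: FP16 inputs, but the product of two FP16 values is
# kept exact (it always fits in FP32's 24-bit significand) and only the
# running sum is rounded, to FP32 rather than FP16.
acc32 = 0.0
for x, y in zip(a, b):
    acc32 = fp32(acc32 + x * y)

ref = math.fsum(x * y for x, y in zip(a, b))  # exact reference sum
err16 = abs(acc16 - ref)
err32 = abs(acc32 - ref)
print(err16, err32)  # the FP32 accumulator stays far closer to the exact sum
```

The FP32 accumulator is the key: it absorbs thousands of small partial sums without the catastrophic rounding that an FP16 accumulator suffers once the running total grows large relative to FP16's ~3 decimal digits.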


Fair enough, but that still uses a lot more memory during training than what DeepSeek is doing.
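A rough back-of-the-envelope on persistent per-parameter training state under the standard mixed-precision-plus-Adam recipe (these are the commonly cited textbook byte counts, not DeepSeek's actual accounting, and activations are excluded):

```python
# Approximate persistent bytes per parameter during training.
# Commonly cited figures for standard recipes; activations not counted.

def bytes_per_param(weights, grads, master, optimizer):
    """Sum the per-parameter states kept resident across steps."""
    return weights + grads + master + optimizer

# FP16 mixed precision with Adam:
# FP16 weights (2) + FP16 grads (2) + FP32 master weights (4)
# + FP32 Adam moments m and v (4 + 4)
mixed_fp16 = bytes_per_param(2, 2, 4, 4 + 4)

# Pure FP32 with Adam:
# FP32 weights (4) + FP32 grads (4) + no separate master copy
# + FP32 Adam moments (4 + 4)
pure_fp32 = bytes_per_param(4, 4, 0, 4 + 4)

print(mixed_fp16, pure_fp32)  # both come to 16 bytes/param
```

Notably the two recipes tie on optimizer-state memory, since mixed precision adds an FP32 master copy; its memory savings come mainly from activations and gradients held in the lower-precision format, which is also where dropping to even narrower formats cuts further.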



