
Eyeballing third-party annotated die shots [1], it's about the size of two GPU cores, but achieves 15.8 TFLOPS, which is more than the reported 14.7 TFLOPS of the 32-core GPU in the binned M4 Max.

[1] https://vengineer.hatenablog.com/entry/2024/10/13/080000
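
Rough back-of-the-envelope math with those numbers, treating the eyeballed two-core area estimate as accurate:

    # Area-efficiency comparison using the figures above. The "two GPU cores"
    # of area is an eyeball estimate from the annotated die shot, not a
    # measurement.
    ane_tflops = 15.8            # reported ANE throughput
    gpu_tflops = 14.7            # reported 32-core GPU throughput (binned M4 Max)
    gpu_cores = 32
    ane_area_in_gpu_cores = 2    # rough die-shot estimate

    per_gpu_core = gpu_tflops / gpu_cores                   # ~0.46 TFLOPS per core
    per_ane_core_area = ane_tflops / ane_area_in_gpu_cores  # ~7.9 TFLOPS per core-equivalent area
    print(per_ane_core_area / per_gpu_core)                 # ~17x throughput per unit area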

Not really. That's 15.8 TFLOPS of fp16 compared to 14.7 TFLOPS of fp32 (which is actually useful outside AI). It would be interesting to see whether you can configure the ANE to recover fp32 precision at lower throughput [1].

[1] https://arxiv.org/abs/2203.03341
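
For reference, the trick in [1] is to split each fp32 operand into an fp16 high part plus a scaled fp16 residual, do the cross products at low precision, and accumulate in fp32. A minimal numpy sketch of the arithmetic (whether the ANE exposes enough control to actually do this is exactly the open question, and the names here are made up):

    import numpy as np

    # Split-precision sketch: represent an fp32 value as an fp16 "hi" part plus
    # a scaled fp16 residual, multiply at half precision, accumulate in fp32.
    SCALE = 2.0 ** 11  # shifts the residual into fp16's mantissa range

    def split(x):
        hi = np.float16(x)
        lo = np.float16((np.float32(x) - np.float32(hi)) * SCALE)
        return hi, lo

    def mul_recovered(a, b):
        a_hi, a_lo = split(a)
        b_hi, b_lo = split(b)
        # three half-precision products with fp32 accumulation;
        # the lo*lo term is small enough to drop
        acc = np.float32(a_hi) * np.float32(b_hi)
        acc += (np.float32(a_hi) * np.float32(b_lo)
                + np.float32(a_lo) * np.float32(b_hi)) / np.float32(SCALE)
        return acc

    a, b = 1.2345678, 9.8765432
    print(np.float16(a) * np.float16(b))   # plain fp16: ~3 correct digits
    print(mul_recovered(a, b))             # close to the fp32 product
    print(np.float32(a) * np.float32(b))   # fp32 reference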


Apple GPUs run fp16 at the same rate as fp32 except on phones, so the fp16 figure is comparable for ML. No one runs inference from fp32 weights.

But the point was about area efficiency.



