
I don't think Apple has missed out on much (yet). The best LLMs (e.g. GPT-4o, Sonnet 3.7) are nowhere near being able to run locally, and they still make mistakes.

Some LLMs can run locally, but they're brutally slow and limited to small context windows.
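For anyone curious what "locally" looks like in practice, here's a minimal sketch using llama-cpp-python (the model path and context size are placeholder assumptions, not recommendations):

    from llama_cpp import Llama

    # Hypothetical 4-bit quantized GGUF model; n_ctx is the context
    # window, which on consumer hardware is often kept small.
    llm = Llama(model_path="models/llama-7b-q4_k_m.gguf", n_ctx=4096)

    out = llm("Q: Where is Apple headquartered? A:", max_tokens=32)
    print(out["choices"][0]["text"])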

Apple is likely waiting until you can run a really good model on device (i.e. iOS), which makes sense to me. It's not like they're losing customers over this right now.



They are playing the long game, which is what they have always done: wait until the silicon enables it for most users. The Apple Silicon track record suggests as much; wait a couple of years and we'll get M3-Ultra-class capability in all Apple devices. Someday even the lowest-end device will be able to run state-of-the-art LLMs on device.


Siri hasn't run on device for most of its existence. It's only in the last few years that Apple suddenly decided it was a priority.


All they have to show is incremental improvement over Siri. For that, quantized models are more than enough, in my opinion.
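Back-of-the-envelope on why quantization is the lever here (rough weight-memory estimates only, ignoring activations and KV cache):

    # Weight memory in bytes ~= parameter count * bits per weight / 8.
    def model_size_gb(params_billion: float, bits: int) -> float:
        return params_billion * 1e9 * bits / 8 / 1e9

    for bits in (16, 8, 4):
        print(f"7B model at {bits}-bit: ~{model_size_gb(7, bits):.1f} GB")
    # fp16 is ~14 GB; 4-bit is ~3.5 GB. That's the gap between
    # "won't fit on a phone" and "plausible on device".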


Sonnet 3.7 best? That thing is a dumpster fire. Totally useless vs 3.5.



