
The crux is how big the L is in a local LLM. Depending on the task, you can actually get really good performance out of topically trained models when they're leveraged for their specific purpose.


There's a lot of L's in LLLM, so overall it's hard to tell what you're trying to say...

Is it 'Local'? 'Large'? 'Language'?


Clearly the Large part, given the context...LLMs usually miss stuff like this, funnily enough.


Do you see the C for Cheap in there? Me neither.


Sorry I'm not following. Cheap in terms of what, hardware cost?

From Apple's point of view a local model would be the cheapest possible to run, as the end-user pays for both the hardware and the power consumption...


Username checks out.




