
This is me speculating along with you, so don't take this as fact, but my sense is that the LLM tool stack is getting "layerized" like network layer architectures.

Right now the space is moving fast, new concepts are being introduced constantly, and the ecosystem hasn't settled.

https://en.wikipedia.org/wiki/OSI_model#Layer_architecture

But as with other computing abstractions (shells, terminals, GUIs, and so on), we're getting there. Just faster than ever.



That's insightful. Thank you for sharing your work and the patient responses to everyone's questions.

Yesterday I started exploring a smaller Gemma3 model locally with Ollama, and it's clearly a level up from the previous model I was using (Llama3) in terms of instruction comprehension and the sophistication of responses. It's faster, smaller, and smarter.
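For anyone who wants to try something similar: below is a minimal sketch of querying a locally running Ollama server over its /api/generate HTTP endpoint (Ollama's default port is 11434). The model name "gemma3" and the prompt are placeholders; this assumes you've already run `ollama pull gemma3` and have the server running.

```python
import json
import urllib.request

# Ollama's default local generate endpoint
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    # Minimal non-streaming request body for /api/generate
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # POST the JSON payload and return the model's text response
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a local Ollama server with the model pulled):
# print(generate("gemma3", "Summarize the OSI model in one sentence."))
```

The same request shape works for any model you've pulled locally, so swapping Llama3 for Gemma3 is just a change of the model string.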

I very much appreciate how such innovative technology is available for non-experts to benefit from and participate in. I think one of the best things about the emergence and evolution of LLMs is the power of open source, open standards, and the ideal of democratizing artificial intelligence and access to it. The age-old dream of machines augmenting the human intellect (Vannevar Bush, Doug Engelbart, et al.) is being realized in a surprising way, and seeing the foundational layers being developed in real time is wonderful.


Of course! Glad you found models that work well for you; we're all learning together. Even on the "expert side" we're learning from what folks like yourself are doing and taking note, so we can shape these models to work better for you all.



