
I don't know if you are indirectly referring to me, but I have done such an implementation, and those particular LLMs are very limited. Two things come to mind.

1. The limited sense of "truth-seeking" I described still holds. Within the narrow world model that limited training on a limited dataset produces, such a model "seeks to understand" the approximate concept I am imperfectly expressing, to the extent it has data for it, and then generates responses based on that.

2. SotA models have access to external data, whether through web search, RAG with a vector database, etc. They also use chain-of-thought reasoning, and they are trained on datasets that teach them to exploit these tools, which they do. The zero-to-hero sequence does not lead you to build such an LLM, and the one you do build has a very limited computational graph. So with respect to more traditional notions of "truth-seeking", these LLMs fundamentally lack the equipment that SotA models have.
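To make the retrieval part of that concrete, here is a minimal sketch of the RAG loop: embed documents, store them, retrieve the nearest ones for a query, and prepend them to the prompt. This is a toy, not any production system's implementation; the hash-based "embedding" stands in for a real learned embedding model, and all names are hypothetical.

  import hashlib
  import math

  def embed(text: str, dim: int = 64) -> list[float]:
      """Toy embedding: hash each token into a fixed-size, L2-normalized vector.
      A real system would use a learned embedding model here."""
      vec = [0.0] * dim
      for token in text.lower().split():
          h = int(hashlib.md5(token.encode()).hexdigest(), 16)
          vec[h % dim] += 1.0
      norm = math.sqrt(sum(x * x for x in vec)) or 1.0
      return [x / norm for x in vec]

  def cosine(a: list[float], b: list[float]) -> float:
      # Vectors are normalized, so the dot product is the cosine similarity.
      return sum(x * y for x, y in zip(a, b))

  class VectorStore:
      """In-memory stand-in for a vector database: store (embedding, doc)
      pairs and return the documents nearest to a query."""
      def __init__(self) -> None:
          self.items: list[tuple[list[float], str]] = []

      def add(self, doc: str) -> None:
          self.items.append((embed(doc), doc))

      def search(self, query: str, k: int = 2) -> list[str]:
          q = embed(query)
          ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
          return [doc for _, doc in ranked[:k]]

  if __name__ == "__main__":
      store = VectorStore()
      store.add("The zero-to-hero series builds a small GPT from scratch.")
      store.add("SotA assistants retrieve external documents before answering.")
      store.add("Chain of thought lets a model spend more compute per answer.")

      question = "How do SotA models ground their answers?"
      context = store.search(question)
      # In a real system this augmented prompt would be sent to the LLM;
      # the retrieved context is what gives it facts beyond its weights.
      prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
      print(prompt)

The point of the sketch is the architecture, not the retrieval quality: the zero-to-hero model has nothing corresponding to the search step, so everything it "knows" has to come from its weights.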


