Hacker News

On a similar note, I can't wait for LLMs to digest _all_ the research papers readable and accessible enough for them, "take notes" in an index-suitable format/structure, and then act like a human who'd done that over an obviously more limited corpus: respond to questions by translating them into relevant keywords, looking them up, _skimming the contents again,_ and finding the relevant information. A first pass might not turn up anything useful, and thus necessitate further visits to the index/library.

With the needed preprocessing, an LLM that can "go and do some research to adequately respond" could be extremely powerful.

We've spent the last ~10 millennia improving knowledge management technology to scale beyond the capacity/time of individual brains. Let the language model use actual research on this and pre-digest, not just Bing search. No need for its short-term memory to remember which piece of code did what: just tag it when reading and rely on scalable, shared indexing of tags.
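The tag-while-reading idea can be sketched as a toy inverted index: pre-digest each document into keyword tags once, then answer questions by looking up keywords instead of holding everything in "short-term memory." Everything here is hypothetical, the corpus, the function names, and the naive whitespace tokenization stand in for whatever a real pipeline would use:

```python
from collections import defaultdict

# Toy corpus standing in for "all the research papers" (hypothetical data).
DOCS = {
    "paper_a": "transformers scale with data and compute",
    "paper_b": "retrieval augments language models with external memory",
    "paper_c": "indexing lets you skim only the relevant passages",
}

def build_index(docs):
    """Pre-digest step: tag each document with its keywords, once."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def research(question, index, docs):
    """Translate the question into keywords, look them up in the
    shared index, and 'skim' (return) the matching documents."""
    keywords = question.lower().split()
    hits = set()
    for kw in keywords:
        hits |= index.get(kw, set())
    return [docs[doc_id] for doc_id in sorted(hits)]

index = build_index(DOCS)
print(research("what does retrieval add to language models", index, DOCS))
```

If `research` comes back empty, the caller (the LLM) would reformulate the keywords and visit the index again, which is the "further visits to the library" loop above.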

Though the more I think about it, the more it sounds like normal LLM pretraining with the knowledge index being the giant chunk of LLM weights.


