
It would be cool to try to generate the kind of "knowledge" Cyc has automatically, from LLMs.


Or vice versa - perhaps some subset of the "thought chains" of Cyc's inference system could be useful training data for LLMs.


When I first learned about LLMs, what came to mind is some sort of "meeting of the minds" with Cyc. 'Twas not to be, apparently.


I view Cyc's role there as RAG for common-sense reasoning. It might prevent models from recommending glue on pizza.

    ;; Cyc-style common-sense constraints (illustrative pseudocode):
    (is-a 'pizza 'food)            ; pizza is a kind of food
    (not (is-a 'glue 'food))       ; glue is not
    (for-all (i ingredients)       ; every proposed ingredient
      (assert-is-a i 'food))       ;   must itself be a food



Sure, but the bigger models don't make these trivial mistakes, and I'm not sure translating the LLM's English sentences into Lisp and trying to check them would be more accurate than just training the models better.


The bigger models avoid those mistakes by being, well, bigger. Offloading to a structured knowledgebase would achieve the same without the model needing to be bigger. Indeed, the model could be a lot smaller (and a lot less resource-intensive) if it only needed to worry about converting $LANGUAGE queries into Lisp queries and Lisp results back into $LANGUAGE results (where $LANGUAGE is the user's natural language, whatever that might be). It wouldn't have to store some approximation of that knowledgebase within itself, on top of understanding $LANGUAGE and whatever ad-hoc query/result language it's unconsciously invented for itself.
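
A minimal sketch of that division of labor, assuming a toy fact base and made-up names (*facts*, is-a-p, edible-p; none of this is Cyc's actual API): the model's only job would be to turn "can I put glue on pizza?" into something like (edible-p 'glue), and to phrase the knowledgebase's verdict back in $LANGUAGE.

    ;; Toy knowledgebase standing in for Cyc; the model queries it
    ;; instead of memorizing the facts itself.
    (defparameter *facts*
      '((is-a pizza food)
        (is-a cheese food)
        (is-a glue adhesive)))

    (defun is-a-p (thing category)
      "True if the fact base asserts (is-a THING CATEGORY)."
      (member (list 'is-a thing category) *facts* :test #'equal))

    (defun edible-p (thing)
      "A proposed topping passes only if the fact base says it is a food."
      (is-a-p thing 'food))

    ;; (edible-p 'cheese) => truthy, (edible-p 'glue) => NIL;
    ;; the model only has to render that verdict back into prose.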


Beyond just checking for mistakes, it would be interesting to see if Cyc has concepts that the LLMs don't or vice versa. Can we determine this by examining the models' internals?



