
> The model isn't based on any rules: it's entirely implicit. There are no subjects and no logic involved.

In theory an LLM could learn any model at all, including models and combinations of models that use logical reasoning. How much logical reasoning (if any) GPT-4 has encoded is debatable, but don't mistake GPT's practical limitations for theoretical limitations.



> In theory an LLM could learn any model at all, including models and combinations of models that use logical reasoning.

Yes.

But that is not the same as GPT having its own logical reasoning.

An LLM that creates its own behavior would be a fundamentally different thing from what "LLM" is defined to be here in this conversation.

This is not a theoretical limitation: it is a literal description. An LLM "exhibits" whatever behavior it can find in the content it modeled. That is, fundamentally, the only thing an LLM does.



