
It doesn't suffice. It's also vastly cheaper energetically just to have (algorithmic) negation. Compressing (A, not A) into a probability function is incomprehensibly expensive.
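To make the contrast concrete, here is a toy sketch (my illustration, not from the thread) of the two approaches being compared: algorithmic negation as a single exact operation, versus a hypothetical learned probability table standing in for a "probability function" over (A, not A).

```python
# Algorithmic negation: one constant-time operation, always exact.
def neg(a: bool) -> bool:
    return not a

# A "probabilistic model of negation": estimated probabilities that
# must be learned from data and are only approximately correct.
# (These numbers are hypothetical, purely for illustration.)
learned_not = {True: 0.02, False: 0.97}  # P(output is True | input)

def prob_neg(a: bool) -> bool:
    # Decide by thresholding the learned probability.
    return learned_not[a] > 0.5

assert neg(True) is False
assert prob_neg(True) is False  # agrees here, but only with high likelihood
```

The symbolic version costs one operation and never errs; the probabilistic stand-in needs training data to populate its estimates and is only statistically reliable.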


> It's also vastly energetically cheaper just to have (algorithmic) negation.

Even if true, that's an argument that it's cheaper to have something, not that it's cheaper to develop it through natural selection. The training time and energy required by LLMs show how energy-intensive it is to train to the point of grokking/circuit generalization.


It is a matter of empirical fact that we can reason with logical relationships. Thus taking an LLM and its training as a model of cognition is empirically false.

This should be doubly obvious, since as a model -- as you point out -- it makes trivial aspects of our cognition impossibly expensive to acquire.


> It is a matter of empirical fact that we can reason with logical relationships

It is a matter of empirical fact that our ability to correctly reason with logical relationships holds only with high statistical likelihood, not with certainty. That looks less like actual logic and more like a probabilistic model of logic.
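One way to see the force of this point (a sketch of my own, assuming a hypothetical per-step reliability): chained exact logical steps stay correct, while steps that each succeed only with probability p compound error as p**k.

```python
# If each reasoning step is correct with probability p (a hypothetical
# figure), the chance that a chain of k steps is entirely correct is p**k.
p = 0.99  # assumed per-step reliability of a probabilistic reasoner

for k in (1, 10, 100):
    print(k, p ** k)
# Even a 99%-reliable step drops to roughly a 37% success rate
# over a 100-step chain, whereas exact logic stays at 100%.
```

This is consistent with either reading of the thread: humans making occasional logical errors is what a probabilistic model of logic would predict.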



