
Whatever comes out of any LLM will directly depend on the data you feed it and which answers you reinforce as correct. There is nothing unknown or mystical about it. I honestly think that the main reason big tech claims they “don’t understand how they work” is either to avoid responsibility for what comes out of them or as a marketing strategy to impress the public.

EDIT: By the way, I definitely think LLMs are intelligent and could even be considered “synthetic minds.” That’s not to say they are sentient, but they will definitely be subject to all kinds of psychological phenomena, which is very interesting. However, this is outside the scope of my initial comment.

> Whatever comes out of any LLM will directly depend on the data you feed it

Right, and whatever comes out of Conway's Game of Life directly depends on its initial setup as well. Show me a Game of Life configuration tailored to emulate human speech and trained on the entire internet, then tell me your prediction of how it will evolve. You will get it completely wrong. Emergent behavior is a real thing.
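
To make this concrete, the entire update rule of the Game of Life fits in a dozen lines (a minimal Python sketch; the set-of-cells representation and the glider demo are my own choices, not anything from the thread):

    from collections import Counter

    def step(alive):
        # alive is a set of (x, y) live cells; count the live
        # neighbours of every cell adjacent to a live one.
        counts = Counter((x + dx, y + dy)
                         for x, y in alive
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # B3/S23: a cell is born with exactly 3 live neighbours,
        # survives with 2 or 3, and dies otherwise.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in alive)}

    # A glider: five cells that translate across the grid forever,
    # a behaviour stated nowhere in the rule above.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # same shape, shifted one cell diagonally

Nothing in those lines hints at gliders, glider guns, or the Turing-complete machinery people have built out of them; all of the interesting behavior emerges from the rule plus the configuration.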

> There is nothing unknown or mystical about it.

Almost all researchers and practitioners in the field seem to disagree with you on this. It is surprising that training a system to be extremely good at auto-completing English text is enough for it to develop an ability to reason. I happen to believe this is more an emergent property of our language than of neural networks, but almost no one predicted it, it is not easily explained, and it may even feel a bit mystical.
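
Concretely, "auto-completing text" here means the standard autoregressive objective: minimize the negative log-likelihood of each token given everything before it (the usual notation, not something from the thread):

    L(θ) = − Σ_t log p_θ(x_t | x_{<t})

Chat behavior, instruction following, and the apparent reasoning are all layered on top of models pretrained this way, which is part of why the result was so unexpected.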

Ph.D. dissertations have been written on trying to understand what happens inside large neural networks. It's not as simple and obvious as you make it out to be.



