For multiplication, I can work out any problem by hand, and I have a sense of what it will do under any circumstance.

> Are you saying that you need to have developed high level descriptions of the behavior of a system in order to feel you understand it?

Yes. That's almost the definition of "understanding."

> What if there are no high level descriptions?

There are things we don't or can't understand; that's roughly the territory of Gödel's incompleteness theorems. That likely includes some phenomena in fluid mechanics and in quantum mechanics. It may or may not include large-scale deep learning models.

It's okay to admit we don't, or can't, understand something.

> Or perhaps you mean that you already have a set of categories for output behavior and to truly understand something you need to be able to categorise the inputs and know which broad input categories result in which output categories?

There are different levels of understanding. With LLMs, however, I don't have a clear sense of the conditions under which one might decide to, say, eradicate humanity. That suggests my understanding of them is very limited. I don't think many people understand them much better than I do, and no one understands them well.

By contrast, I feel I understand a multiplication algorithm well enough to know it will never do that. If I multiply two numbers, I won't get a humanity-ending answer out.
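
To make that concrete, here's a minimal sketch of what I mean (my own illustration, assuming nonnegative integers; the names are mine): shift-and-add multiplication, whose entire behavior is captured by one invariant. At every step, result + a * b equals the original product, so the output is exactly the product, for every input, and nothing else can happen.

    # Shift-and-add ("Russian peasant") multiplication: only doubling,
    # halving, and addition. Loop invariant:
    #     result + a * b == original_a * original_b
    # so when b reaches 0, result is exactly the product.
    def multiply(a: int, b: int) -> int:
        result = 0
        while b > 0:
            if b & 1:        # low bit of b is set: add the current a
                result += a
            a <<= 1          # double a
            b >>= 1          # halve b
        return result

    assert multiply(1234, 5678) == 1234 * 5678

That invariant is the high-level description I was talking about: the input-to-output map is fully characterized. Nothing comparable exists for a large model's weights.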

I don't know if deep learning models have some analogue to emotions. I do know multiplication doesn't.

And so on.


