
Think of it like discovering a new human language. We use logic, along with knowledge about the world, to triangulate which words map to which concepts.

ML models work the same way. The difference is they aren't human, so it's harder to just use your empathy muscles (though even humans cut off from the rest of the world can arrive at pretty wild perspectives and ways of thinking). But they are logical, in some respects, and they are modeling a human perspective on some process in the world. And as a sibling comment noted with a link to the Distill paper, we need a lot of tools to make that process easier.

For example, researchers will often probe single neurons and discover they do something conceptually understandable, like a neuron that detects when generated text is inside a parenthetical, so that the parenthesis eventually gets closed. I'd expect the "symbols" to be very similar, because the neurons are symbols too, after all. Both require you to either relate them to a small concept or compose several into bigger concepts (or both).
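
To make that concrete, here's a rough sketch of what probing one neuron can look like, in Python with Hugging Face transformers. The layer and neuron indices are made up for illustration; actual work would sweep lots of neurons and statistically correlate activations with the paren-depth feature rather than eyeballing a printout:

    # Rough sketch of single-neuron probing with GPT-2.
    # LAYER and NEURON are hypothetical indices, picked for illustration.
    import torch
    from transformers import GPT2Tokenizer, GPT2Model

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
    model.eval()

    LAYER, NEURON = 6, 300  # hypothetical: residual-stream dim 300 at layer 6

    text = "The claim (which we verify below) holds in general."
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # hidden_states is a tuple of (num_layers + 1) tensors,
    # each of shape [batch, seq_len, hidden_size]
    acts = outputs.hidden_states[LAYER][0, :, NEURON]

    # Ground-truth feature: is each token inside an open parenthesis?
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    depth, inside = 0, []
    for tok in tokens:
        depth += tok.count("(") - tok.count(")")
        inside.append(depth > 0)

    # Compare the neuron's activation against the feature, token by token
    for tok, act, flag in zip(tokens, acts, inside):
        print(f"{tok!r:>12}  act={act.item():+.3f}  in_paren={flag}")

If the neuron really tracks parentheticals, the activations will visibly separate between in_paren=True and in_paren=False tokens; if not, you move on to the next candidate.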



Thanks!



