
I was going to make the same point about the superposition hypothesis [0] when the OP comment (edit: the OP commenter is, as other HN comments point out, the Anthropic cofounder behind the superposition research) mentioned: "I've had a lot more success with: * The linear representation hypothesis - The idea that "concepts" (features) correspond to directions in neural networks". A one-concept-per-feature view seems too basic to explain some of what NNs manage to learn from their training data. On one of our custom-trained neural network models (not an LLM; audio-based and currently proprietary) we noticed the same phenomenon: the model could "overfit" a large dataset despite having few parameters relative to the dataset size (and with dropout in the early layers, at that).
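For anyone who wants to poke at this, here's a minimal toy sketch in the spirit of the superposition setup from [0] (my own simplification, not Anthropic's code, and the sizes/sparsity are made-up numbers): pack more sparse features than dimensions into a small linear map and recover them with a ReLU readout.

  # Toy superposition sketch: compress n sparse features into
  # m < n dimensions, then reconstruct them with a ReLU readout.
  import torch

  n_features, n_dims = 20, 5   # more "concepts" than dimensions
  W = torch.randn(n_dims, n_features, requires_grad=True)
  b = torch.zeros(n_features, requires_grad=True)
  opt = torch.optim.Adam([W, b], lr=1e-2)

  for step in range(5000):
      # Sparse inputs: each feature is active ~5% of the time.
      x = torch.rand(256, n_features) * (torch.rand(256, n_features) < 0.05)
      h = x @ W.T                      # project into the small space
      x_hat = torch.relu(h @ W + b)    # linear readout + ReLU
      loss = ((x - x_hat) ** 2).mean()
      opt.zero_grad(); loss.backward(); opt.step()

If the features are sparse enough, reconstruction loss ends up far lower than a "one orthogonal direction per feature" picture would allow with only 5 dimensions; the model stores the 20 features in superposition as nearly-orthogonal directions. That's the same flavor of "more stuff memorized than the parameter count suggests" we saw in our audio model.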

[0] https://www.anthropic.com/research/superposition-memorizatio...


