You might also be interested in the recent work on the "resonator networks" VSA architecture [1-4] from the Olshausen lab at Berkeley (P. Kanerva, who created the influential SDM model [5], is one of the lab members).

It's a continuation of Plate's [6] and Kanerva's work in the '90s, and of Olshausen's groundbreaking work on sparse coding [7], which inspired the popular sparse autoencoders [8].

I find it especially promising that they found this superposition-based approach to be competitive with the optimization-based search so prevalent in modern neural nets (a toy sketch of the basic VSA operations follows the links below). Maybe backprop will die one day and be replaced with something more energy-efficient along these lines.

[1] https://redwood.berkeley.edu/wp-content/uploads/2020/11/frad...

[2] https://redwood.berkeley.edu/wp-content/uploads/2020/11/kent...

[3] https://arxiv.org/abs/2009.06734

[4] https://github.com/spencerkent/resonator-networks

[5] https://en.wikipedia.org/wiki/Sparse_distributed_memory

[6] https://www.amazon.com/Holographic-Reduced-Representation-Di...

[7] http://www.scholarpedia.org/article/Sparse_coding

[8] https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf
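
(For anyone not already familiar with the VSA operations these papers build on, here's a toy sketch of my own. It assumes the bipolar, Hadamard-product flavor of binding; Plate's HRR uses circular convolution instead, but the idea is the same. The names and sizes are mine, not from the papers.)

    import numpy as np

    rng = np.random.default_rng(1)
    D = 2000                                    # hypervector dimensionality

    def hv():
        # random bipolar hypervector
        return rng.choice([-1, 1], size=D)

    # Store a small key-value mapping in one vector: bind each role to its
    # filler with the element-wise (Hadamard) product, then superpose.
    color, shape = hv(), hv()                   # role vectors
    red, square  = hv(), hv()                   # filler vectors
    record = color * red + shape * square

    # Query by unbinding a role; the result is a noisy copy of its filler,
    # recovered by comparing against the known fillers.
    query = record * color                      # color*color = 1 elementwise
    print(np.dot(query, red) / D)               # close to 1.0
    print(np.dot(query, square) / D)            # close to 0.0
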



Thank you for the great reading material! From a skim, my take is that resonator networks are able to sift through data and suss out features (factors) from the noise, and even decode data structures like vectors and mappings. And RNs can be made to iterate on a problem much like a person might concentrate on a mental task. Is that a fair summary of their capabilities?
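
For what it's worth, here's roughly how I picture that iteration after skimming [1] and [3]: a minimal NumPy sketch, assuming bipolar vectors and Hadamard-product binding, with sizes and variable names of my own, so treat it as an illustration rather than the authors' code.

    import numpy as np

    rng = np.random.default_rng(0)
    D, K = 2000, 30                   # vector dimension, codebook size per factor

    # Random bipolar codebooks for three factors (rows are code vectors)
    A, B, C = (rng.choice([-1, 1], size=(K, D)) for _ in range(3))

    # Build a composite vector by binding one code vector from each factor
    s = A[3] * B[17] * C[8]

    # Start each estimate as the superposition of its entire codebook
    sign = lambda v: np.where(v >= 0, 1, -1)
    a, b, c = sign(A.sum(0)), sign(B.sum(0)), sign(C.sum(0))

    for _ in range(50):
        # Unbind the current guesses for the other two factors, then
        # "clean up" the result by projecting through the codebook.
        a = sign(A.T @ (A @ (s * b * c)))
        b = sign(B.T @ (B @ (s * a * c)))
        c = sign(C.T @ (C @ (s * a * b)))

    # The estimates settle on the original codebook entries
    print(np.argmax(A @ a), np.argmax(B @ b), np.argmax(C @ c))   # 3 17 8

As I understand it, the "concentrating on a problem" aspect is that all candidate factorizations are held in superposition at once and the dynamics gradually settle on a consistent one, rather than searching the combinations explicitly.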



