
I always assumed an ANN is simply a universal, learnable function approximator. That is, there is no direct equivalent of classical conditioning — only (data in, expected output) pairs.


There must be a minimal ANN architecture that implements classical conditioning. This architecture could be quite limited in what it can learn compared to ANNs in general, similar to how feed-forward networks are limited compared to RNNs.
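One candidate for such a minimal architecture (my own sketch, not something the thread commits to) is the Rescorla-Wagner model of classical conditioning: a single linear unit trained with a delta rule, where a conditioned stimulus acquires associative strength toward the reward it predicts.

```python
# Minimal sketch: the Rescorla-Wagner model of classical conditioning,
# read as a one-neuron "ANN" trained with a delta rule.

def rescorla_wagner(trials, n_stimuli, lr=0.1):
    """Update associative weights over (stimuli, reward) trials.

    trials: list of (stimulus_vector, reward) pairs, where
    stimulus_vector is a list of 0/1 flags (CS present or absent).
    """
    w = [0.0] * n_stimuli
    for x, reward in trials:
        prediction = sum(wi * xi for wi, xi in zip(w, x))
        error = reward - prediction            # prediction error
        for i in range(n_stimuli):
            w[i] += lr * error * x[i]          # delta-rule update
    return w

# Pair stimulus 0 (the "bell") with reward for 50 trials: its weight
# grows toward 1 (the conditioned-response analogue), while the
# never-presented stimulus 1 stays at 0.
weights = rescorla_wagner([([1, 0], 1.0)] * 50, n_stimuli=2)
```

As the comment suggests, this is far more limited than ANNs in general: it can only learn linear associations between co-presented stimuli and outcomes.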


You can train single-layer neural nets. Not very useful, but they do exist.

There are certain ANN architectures that relied on, essentially, classical conditioning based on Hebbian learning rules and variants thereof. Kohonen self-organizing maps are an example of that.
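For reference, a plain Hebbian update ("cells that fire together, wire together") can be sketched in a few lines — this is my own illustration of one variant of the rules the comment alludes to, not code from any of the systems named.

```python
# Hedged sketch of a plain Hebbian learning rule: strengthen each
# weight in proportion to the product of its input and the unit's
# output, with no supervision signal at all.

def hebbian_step(w, x, lr=0.01):
    """One Hebbian update on weight vector w for input x."""
    y = sum(wi * xi for wi, xi in zip(w, x))           # unit output
    return [wi + lr * y * xi for wi, xi in zip(w, x)]  # dw = lr * y * x

# Repeatedly presenting the pattern (1, 1) rotates w toward that
# pattern -- and, with no normalization, grows it without bound,
# which is why variants like Oja's rule add a decay term.
w = [0.1, 0.0]
for _ in range(200):
    w = hebbian_step(w, [1.0, 1.0])
```

Unlike backprop, nothing here is an error gradient: the weights simply come to reflect correlations in the input, which is the sense in which such systems "condition" rather than fit input/output pairs.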

Not that such historical systems are popular today, though.



