
Well, classical conditioning only really makes sense in the context of an agent that is receiving inputs and taking actions on them. Many neural networks don't solve problems of that type, and so have no need for classical conditioning.

But when you do have such a problem, conditioning is not very complicated. The normal algorithms and neural structures are designed to learn stuff like "when a given input happens, a certain action must be taken", and that's all you really need for conditioning. How does it actually do it? Well, I guess with gradient descent it would work something like this: every time there is a puff of air the network will be like "damn, I should have blinked to avoid this", and so it makes its current internal state a little more likely to lead to blinking. Gradually, as this happens more times, it will learn a strong association with the ringing bell or whatever.

A small RNN could learn this.
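
As a toy illustration of that gradient-descent story (my own sketch, nothing canonical): a single logistic unit in numpy, with the bell as the conditioned stimulus and a made-up "light" input as an uncorrelated distractor.

    # Sketch only: a single logistic unit trained with gradient descent to
    # "blink" whenever the current input predicts an air puff on the next step.
    # The bell reliably precedes the puff; the "light" is unrelated noise.
    # All names and numbers here are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    w = np.zeros(2)                    # weights for [bell, light]
    b = 0.0
    lr = 0.5

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(2000):
        bell = rng.random() < 0.3      # bell rings on 30% of steps
        light = rng.random() < 0.3     # distractor, independent of the puff
        puff_next = bell               # the puff always follows the bell
        x = np.array([float(bell), float(light)])

        blink = sigmoid(w @ x + b)     # current probability of blinking
        # "Damn, I should have blinked": nudge the weights so the same input
        # makes blinking more (or less) likely next time. This is just the
        # log-loss gradient for a logistic unit.
        grad = blink - float(puff_next)
        w -= lr * grad * x
        b -= lr * grad

    print("blink given bell only :", sigmoid(w @ np.array([1.0, 0.0]) + b))
    print("blink given light only:", sigmoid(w @ np.array([0.0, 1.0]) + b))

After a couple thousand trials the unit blinks for the bell and not for the light, which is basically acquisition of the conditioned response. A small RNN would do the same thing but carry the timing in its hidden state instead of framing it as "predict the next step".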



Yeah. It's just not quite clear what the minimal example of such a network would be. I assume you have N inputs and one output. The output is always active when input 1 is active; otherwise the output is inactive. So the other inputs are ignored. However, when one of those other inputs, x, tends to be temporally correlated with input 1, after a while x will generate an output upon activation even if input 1 isn't active. If x becomes decorrelated with input 1, x will again get ignored. Not sure what the simplest network architecture looks like that implements this behavior.
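
If I had to guess at a minimal version, it might be a single unit with a fixed weight from input 1 and plastic weights for everything else, updated online with a delta rule (essentially Rescorla-Wagner). Rough numpy sketch, all numbers made up:

    # Sketch of one possible minimal setup, not a known-correct answer.
    # Input 1 drives the output through a fixed ("hardwired") weight; the other
    # inputs have plastic weights trained online to predict input 1.
    import numpy as np

    rng = np.random.default_rng(1)
    w = np.zeros(3)                 # input 1 plus two extra inputs
    w[0] = 1.0                      # fixed: input 1 alone always fires the output
    lr = 0.1

    def output(x):
        return float(w @ x > 0.5)

    def update(x):
        # Delta rule on the plastic weights: move the prediction made by the
        # other inputs toward whatever input 1 is doing right now.
        target, pred = x[0], w[1:] @ x[1:]
        w[1:] += lr * (target - pred) * x[1:]

    # Phase 1: input 2 co-occurs with input 1, input 3 is random (acquisition).
    for _ in range(200):
        on = float(rng.random() < 0.5)
        update(np.array([on, on, float(rng.random() < 0.5)]))
    print("after pairing:", w.round(2),
          "input 2 alone ->", output(np.array([0.0, 1.0, 0.0])))

    # Phase 2: input 2 keeps firing but input 1 stays silent (extinction).
    for _ in range(200):
        update(np.array([0.0, float(rng.random() < 0.5), float(rng.random() < 0.5)]))
    print("after decorrelation:", w.round(2),
          "input 2 alone ->", output(np.array([0.0, 1.0, 0.0])))

The fixed weight keeps the output locked to input 1, the delta rule grows the weight of any input that reliably co-occurs with it, and the same rule drives that weight back down once the correlation disappears.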


> The output is always active when input 1 is active.

Neural networks don't have instinctive behavior like that.


Things can be hardwired.
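
For instance, in PyTorch you could store the innate connection as a buffer instead of a Parameter, so the optimizer never touches it while the other weights stay plastic. Toy sketch, all names and numbers invented:

    import torch

    class ReflexUnit(torch.nn.Module):
        def __init__(self, n_plastic):
            super().__init__()
            # Hardwired reflex: a buffer, so it is saved with the model but is
            # not a trainable parameter.
            self.register_buffer("w_reflex", torch.tensor(5.0))
            # Learned associations: ordinary trainable weights.
            self.w_plastic = torch.nn.Parameter(torch.zeros(n_plastic))

        def forward(self, puff, other):
            return torch.sigmoid(self.w_reflex * puff + other @ self.w_plastic)

    unit = ReflexUnit(n_plastic=2)
    print(unit(torch.tensor(1.0), torch.zeros(2)))  # ~0.99 with zero training
    # torch.optim.SGD(unit.parameters(), lr=0.1) would only ever see w_plastic.

So the puff-to-blink reflex exists from step zero, and gradient descent is free to hang conditioned stimuli off it.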



