
Cognitive architecture research is an interesting mix of AI and cognitive science. It's a different approach from the usual: get a ton of data and train a neural network.

I had the privilege of taking multiple AI classes taught by Professor Laird during my master's. He is a great person, and I learned a lot from him, both intellectually and personally. Here's a recent video overview of Soar: https://www.youtube.com/watch?v=BUiWk-DqLaA



I think a combination of cognitive architectures and other techniques from the symbolic era of AI is going to be needed to get us to the next big step. It seems unlikely that training larger and larger models packed full of hidden layers can get all the way to AGI (or whatever you want to call the next evolution).


I think only by mimicking natural cognition and respecting its physical constraints will we get there.

There are pieces of "biologically inspired" mechanisms here and there in current connectionist methods, like convolutional nets, which descend from the Neocognitron (1980), which was in turn based on earlier visual cortex research. And this is probably why some of these models work well in their narrow domains.
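The two ideas convnets took from that lineage are local receptive fields and shared weights: one small kernel slides over every patch of the input. A minimal sketch in plain Python (single channel, 'valid' padding, names illustrative):

```python
# Minimal 2D convolution illustrating "local receptive field + shared weights".
# Single channel, 'valid' padding, no strides; purely for illustration.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            # The same kernel (shared weights) is applied to every local patch.
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[-1, 1]]  # 1x2 kernel: responds where intensity rises left-to-right
print(conv2d(image, edge))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

The kernel fires only at the vertical edge in every row, which is exactly the sort of local feature detector the visual-cortex models postulated.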

Or in cognitive architectures like Soar or ACT-R, which borrow pieces from cognitive psychology and memory research. But IMO they are too model-heavy, and this is probably the reason they never caught on or even worked as well as connectionist systems.
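For readers unfamiliar with these systems: at their core is a production-rule cycle, where rules are matched against working memory and fired until nothing changes. A minimal sketch in Python (the rule format and names here are illustrative, not Soar's or ACT-R's actual APIs):

```python
# Toy production-rule cycle: match rules against working memory, fire the
# ones that add new facts, repeat until quiescence. Illustrative only;
# real architectures like Soar add conflict resolution, subgoaling, etc.

rules = [
    # (name, condition over working memory, facts the rule asserts)
    ("infer-mortal", lambda wm: ("socrates", "is", "human") in wm,
     {("socrates", "is", "mortal")}),
    ("infer-dies", lambda wm: ("socrates", "is", "mortal") in wm,
     {("socrates", "will", "die")}),
]

def run(initial_facts, max_cycles=100):
    wm = set(initial_facts)
    for _ in range(max_cycles):
        fired = False
        for name, cond, adds in rules:
            if cond(wm) and not adds <= wm:  # condition holds, facts are new
                wm |= adds                   # fire: add facts to working memory
                fired = True
        if not fired:                        # quiescence: nothing changed
            break
    return wm

wm = run({("socrates", "is", "human")})
print(("socrates", "will", "die") in wm)  # True
```

The "model-heavy" complaint above is about everything layered on top of this loop: hand-built rule sets and memory structures that have to be engineered per task, where connectionist systems just fit weights to data.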

Being simply "inspired" by nature is probably not enough, we'd have to really understand how the animal brain works at a system level to get to actual AI.

Kind of like figuring out the law of gravity from the fall of an apple.


>But IMO they are too model-heavy and this is probably the reason they never caught up or even worked as well as connectionist systems.

There are countless model-based / symbolic intelligent systems at work right now. I'd go so far as to say there is no fully complex end-to-end learning system deployed anywhere in the real world.

Self-driving cars, say, use learned systems for perception, but they still embed human expertise: routing algorithms, software engineered as a designed architecture, and so on. I don't think any autonomous car has actually learned speed limits from data, which would be a rather risky endeavor.
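The hybrid pattern described above can be sketched as a learned module proposing a value and an engineered rule layer enforcing the hard constraint. Everything here is hypothetical and illustrative (the function names, the map table, the numbers), not any vendor's actual stack:

```python
# Hypothetical hybrid architecture: a learned perception module proposes a
# target speed, and a hand-engineered rule layer clamps it to the mapped
# speed limit. All names and values are illustrative.

SPEED_LIMITS_KPH = {"residential": 30, "arterial": 50, "highway": 100}  # from map data, not learned

def perception_target_speed(sensor_frame):
    # Stand-in for a neural network's output (desired speed in km/h).
    return sensor_frame.get("model_target_kph", 0.0)

def plan_speed(sensor_frame, road_class):
    proposed = perception_target_speed(sensor_frame)
    limit = SPEED_LIMITS_KPH[road_class]  # hard symbolic constraint
    return min(proposed, limit)

print(plan_speed({"model_target_kph": 72.0}, "arterial"))  # 50
```

The point is that the safety-critical constraint lives in designed, inspectable code; the learned component only ever proposes, never overrides.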




