Soar Cognitive Architecture (umich.edu)
51 points by zetalyrae on Sept 8, 2021 | 14 comments



It is ironic that the homepage for a tool that is supposed to help research cognition (?) is doing such a bad job explaining what it is.


The more informative description is buried past the wall of hyperlinks above the fold:

>Soar is a general cognitive architecture for developing systems that exhibit intelligent behavior. Researchers all over the world, both from the fields of artificial intelligence and cognitive science, are using Soar for a variety of tasks. It has been in use since 1983, evolving through many different versions to where it is now Soar, Version 9.

At least we don't have to join a Telegram group to get more info.


Cognitive architectures (like Soar, ACT-R, Sigma) are half psychology and cognitive science (building general, computational models of the human mind to understand it) and half AI (a continuation of GOFAI, using similar symbolist/structured approaches to intelligence).
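To make the "symbolist/structured" point concrete, here is a toy sketch of the match-select-act cycle at the heart of production systems like Soar. This is illustrative Python, not Soar's actual rule syntax or engine; the rules and the first-match conflict resolution are my own simplifications.

```python
# Toy production-system loop: repeatedly fire the first rule whose
# condition matches working memory, until quiescence (no rule matches).
def run(working_memory, productions, max_cycles=10):
    for _ in range(max_cycles):
        fired = False
        for condition, action in productions:
            if condition(working_memory):
                action(working_memory)
                fired = True
                break  # naive conflict resolution: first match wins
        if not fired:
            break  # quiescence: no rule matched this cycle
    return working_memory

# Two hypothetical rules: classify a temperature, then recommend an action.
productions = [
    (lambda wm: "temp" in wm and "status" not in wm,
     lambda wm: wm.update(status="hot" if wm["temp"] > 30 else "ok")),
    (lambda wm: wm.get("status") == "hot" and "advice" not in wm,
     lambda wm: wm.update(advice="open-window")),
]

print(run({"temp": 35}, productions))
# {'temp': 35, 'status': 'hot', 'advice': 'open-window'}
```

Real architectures add much more on top of this loop (sophisticated rule matching, subgoaling, learning mechanisms), but the knowledge lives in explicit, inspectable rules rather than learned weights.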

Probably the most famous application of Soar is TacAir-Soar: https://soartech.com/portfolio-posts/automated-intelligent-p...

>TACAIR-SOAR is an intelligent, rule-based system that generates believable humanlike behavior for largescale, distributed military simulations. The innovation of the application is primarily a matter of scale and integration. The system is capable of executing most of the airborne missions that the U.S. military flies in fixed-wing aircraft. It accomplishes its missions by integrating a wide variety of intelligent capabilities, including real-time hierarchical execution of complex goals and plans, communication and coordination with humans and simulated entities, maintenance of situational awareness, and the ability to accept and respond to new orders while in flight.

For a successor project, there's the Sigma cognitive architecture, built by Paul Rosenbloom, who used to work on Soar: https://cogarch.ict.usc.edu/


These projects are a source of fascination to me. However, a persistent question for me is how these more symbolic approaches to cognitive modelling figure in today's world of ML and data-driven AI. I'm very curious to know where, and to what extent, the symbolic approaches of the past (and present) meet with ML. I ask because you clearly have some exposure to these sorts of projects; any sources you can provide would be appreciated.


>I'm very curious to know where and to what extent the symbolic approaches of the past (and present) meet with ML?

If you had a good answer to that, you'd probably be well on your way to a Ph.D., if not a Turing Award. The question of symbolic/sub-symbolic integration has been a big outstanding question in the AI world for a very long time now. I don't think many people were actively working on it for quite a while, but it seems like there has been at least a small uptick in interest in that idea recently. My personal belief is that this kind of integration will be essential, at least in the short term, to achieving something like what we might actually call AGI. And while I'm hardly alone in thinking this, this position is by no means universally held. There are people (Geoff Hinton among others, if memory serves correctly) who believe that "neural nets are completely sufficient".

And frankly, in the long (enough) term that might be right. Build ANNs that are sufficiently deep, sufficiently wide, and with just the right initial architecture, and maybe you get something that develops "the master algorithm" and figures it all out on its own. I think that's probably possible in principle; but my doubt about all of that is more about how realistic it is, especially over shorter time scales.

Anyway, if you're really interested in the topic, Ben Goertzel's OpenCog system includes a strong focus on symbolic/sub-symbolic integration, and borrows a lot of ideas from some well-known cognitive architecture work (LIDA, in particular).

Also, googling "symbolic / sub-symbolic integration" will turn up a ton of sites / papers / books / etc. that go into far more detail.

https://www.google.com/search?channel=fs&client=ubuntu&q=sym...

One book length treatment of this topic that I'm aware of (but not deeply familiar with) is this one, by Ron Sun:

https://www.amazon.com/Connectionist-Symbolic-Integration-Un...


I went very deep into OpenCog and finally had to concede that there just wasn't enough rigor and coordination between the components. Goertzel seems easily distracted by various other subjects. I realize that he has to figure out ways to fund his work, so I am not being judgmental.

In addition to symbolic and deep learning components, future AI systems will most likely have a causal learning component. Judea Pearl has been working on this subject for years. http://bayes.cs.ucla.edu/jp_home.html
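A toy sketch of why causal learning matters, in the spirit of Pearl's observation/intervention distinction (this is my own illustrative simulation, not his code): a hidden confounder Z drives both X and Y, so the observational P(Y | X=1) estimated from passive data overstates X's real effect compared with the interventional P(Y | do(X=1)).

```python
# Structural causal model: Z -> X, Z -> Y, X -> Y, with Z's effect on Y
# larger than X's. Observing X=1 mostly tells you Z was likely 1.
import random

def sample(do_x=None):
    z = random.random() < 0.5                                   # hidden confounder
    x = do_x if do_x is not None else (random.random() < (0.8 if z else 0.2))
    y = random.random() < (0.3 + 0.4 * z + 0.1 * x)             # Z matters more than X
    return z, x, y

random.seed(0)
obs = [sample() for _ in range(100_000)]
p_obs = sum(y for _, x, y in obs if x) / sum(1 for _, x, _ in obs if x)

doed = [sample(do_x=True) for _ in range(100_000)]
p_do = sum(y for _, _, y in doed) / len(doed)

print(f"P(Y|X=1)     ~ {p_obs:.2f}")   # inflated by the confounder
print(f"P(Y|do(X=1)) ~ {p_do:.2f}")    # the true effect of forcing X=1
```

Analytically, P(Y|X=1) works out to 0.72 here while P(Y|do(X=1)) is 0.60; a purely correlational learner would recover the first number and overestimate what intervening on X achieves.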


Good points all around. I think OpenCog has a lot of good ideas, but I won't claim that it's the "be all, end all" as of today. That said, I think to some extent the statement "there just wasn't enough rigor and coordination between the components" may be true exactly because that is the central challenge that still remains to be solved.

At the very least, I think reading Goertzel's books[1] and looking at OpenCog is a good introduction to the issues at hand in a general sense.

Totally agree on the causal learning thing. And that's an area that also seems to have had a resurgence of interest and activity lately.

[1]: Here I specifically mean Engineering General Intelligence, Volumes 1 & 2


Thanks for the leads on this - very excited to look further into it.


Cognitive architecture research is an interesting mix of AI and cognitive science. It's a different approach from the usual: get a ton of data and train a neural network.

I had the privilege of taking multiple AI classes taught by Professor Laird during my master's. He is a great person. I learned a lot from him, both intellectually and personally. Here's a recent video overview of Soar: https://www.youtube.com/watch?v=BUiWk-DqLaA


I think a combination of cognitive architecture and other techniques from the symbolic era of AI is going to be needed to get us to the next big step. It seems unlikely that training larger and larger models packed full of hidden layers can get us all the way to AGI (or whatever you want to call the next evolution).


I think we'll only get there by mimicking natural cognition and respecting its physical constraints.

There are some pieces of "biologically inspired" mechanisms here and there in current connectionist methods, like convolutional nets based on the Neocognitron, which was in turn based on visual cortex research. And this is probably why some of these models work well in their narrow domains.

Or in cognitive architectures like Soar or ACT-R, which borrow some pieces from cognitive psychology and memory research. But IMO they are too model-heavy, and this is probably the reason they never caught on or worked as well as connectionist systems.

Being simply "inspired" by nature is probably not enough, we'd have to really understand how the animal brain works at a system level to get to actual AI.

Kind of like figuring out the law of gravity from a fall of an apple.


>But IMO they are too model-heavy, and this is probably the reason they never caught on or worked as well as connectionist systems.

There are countless model-based / symbolic intelligent systems at work right now. I'd go so far as to say that there is no fully complex "end-to-end" learning system in place anywhere in the outside world.

Self-driving cars, say, use learning systems for perception, but they still deploy human expertise and routing algorithms, and the software at an engineering level is a designed architecture, and so on. I don't think any autonomous car has actually learned speed limits, which would be a rather risky endeavor.


There is a list of notable cognitive architectures (including Soar and more recent ones from the likes of Kanerva and Eliasmith): https://en.wikipedia.org/wiki/Cognitive_architecture#Notable...

Eliasmith created the "Semantic Pointer Architecture", which is sort of a crossover between CS and CogSci: https://en.wikipedia.org/wiki/Spaun_(Semantic_Pointer_Archit... See also his book, How to Build a Brain.

I wish current AI research would get unstuck from the local minimum of deep learning; there's been so much progress in the cognitive sciences in the last few decades.


Wow, when I arrived at CMU for my PhD in 1994 my officemate Erik Altmann was already working on Soar. That's longevity.



