
In the March 2014 version of Li Deng and Dong Yu's book on Deep Learning, they briefly relate Hierarchical Temporal Memory (HTM) to the convolutional neural networks that are popular for Deep Learning.

http://research.microsoft.com/apps/pubs/default.aspx?id=2093...

It's worth noting that most people doing Deep Learning aren't trying to replicate the brain; they just want to do a better job at Machine Learning (ML) and Artificial Intelligence. Here's how I see it as someone working on Deep Learning; someone correct me if I'm wrong.

Deep Learning:
  Trying to do ML - yes
  Trying to replicate brain - no (for the most part)

Numenta (HTM/CLA):
  Trying to do ML - yes (not sure how much they succeed)
  Trying to replicate brain - yes, but (i) we don't know exactly how the brain works, and (ii) they make approximations

Projects like Nengo (http://nengo.ca/):
  Trying to do ML - no
  Trying to replicate brain - yes

I'm not very familiar with Nengo.

Edit: formatting



Well, it seems like there ought to be a level between "simulating the brain" and just coming up with your own algorithm. I would imagine that level as: see what the brain can do at a particular low level, see how close you can come to duplicating that, see what unique approach you can derive from that, apply it to other problems, repeat. That level would be "inspired by the brain without trying to simulate it." In his popular talks Hawkins implies he's doing that, but in his actual software, as you mention, he winds up doing just a variation on standard machine learning.

It would be nice if he had postponed deciding he had a solution and instead kept banging on the question of which algorithms might behave like the things the brain appears to do. I'd like to think you could mine a bunch of ideas from that.



