Numenta Platform for Intelligent Computing (github.com/numenta)
76 points by Playnetway on July 21, 2014 | 25 comments


Sadly, this has the earmarks of "we've abandoned the project we previously claimed was worth thousands per license, so we thought we'd open-source it".

It would have been much nicer if Numenta had gone open source when they had money and people working for them.

It's a shame also in the sense that while Jeff Hawkins's overall paradigm is certainly too simplistic and too ready to dismiss other research, I think his call to have broad, explicit paradigms[1] is a good one, even if modern neuroscientists are more aware of the problems he mentions.

[1] http://www.ted.com/talks/jeff_hawkins_on_how_brain_science_w...


This project is not abandoned, it is thriving. See http://www.ohloh.net/p/nupic, as well as the original announcement at http://numenta.org/blog/2013/06/03/introducing-nupic.html. The old closed-source version was abandoned a long time ago, and this is the newest stuff. Check out the wiki for videos, examples, tutorials, etc. https://github.com/numenta/nupic/wiki


> It's a shame also in the sense that while Jeff Hawkins's overall paradigm is certainly too simplistic.

In what ways is the paradigm too simplistic? Frankly, the fact that there is simplicity in his theories of intelligence makes more of a case for him than against him. Most modern neuroscientists trying to understand intelligence have taken such an extreme reductionist point of view that they seem to be increasingly befuddled.

By the way, the TED Talk is from 2003, which is around the time his theory of intelligence more or less reflected what he proposed in his book, On Intelligence, about Hierarchical Temporal Memory.

Since 2007 I think they've made significant progress. Have you read Numenta's white paper on the Cortical Learning Algorithm? He's also given a much more recent Google Talk on Sparse Distributed Representations.


I have followed the progress of Hawkins and Numenta to an extent. The videos (the Google talk and sitkack's video) still have the same theme - it's all about prediction and pattern recognition in the simplistic sense.

What's missing from this? Off the top of my head:

* Language

* Goal-oriented interaction with the environment

* General purpose reasoning - generating novel behaviors based on observations of the environment; especially, dealing with multiple interacting constraints on an ad-hoc basis and deciding which is most important.

And I'm not arguing for human behavior being all rational deduction - pattern recognition and such are a huge part of human behavior, but all the ways humans or even animals can change their behavior are where biological intelligence really goes past current versions of computer intelligence. The thing with Jeff's talk is that it may well be that the bulk of raw brain activity is focused on just processing raw streams of data. Even if that's true, it doesn't follow that this is what makes the brain intelligent in a different fashion from a video camera.


I believe the basic building blocks (prediction, pattern recognition, attention, etc.) give rise to the higher-level phenomena that you mentioned. I think we need to first understand those fundamental principles, and we'll be able to infer most of the rest from that point.

Emergent phenomena can seem complicated and impossible to understand, but the mechanisms that give rise to them are usually simple (for example, evolution creating diverse and intricate life).



When I read On Intelligence and from there found Numenta's stuff years ago, I was pretty excited. There were some crazy cool demos back then, and they've occasionally popped back up with more.

So it's too bad about all of the patents, then, now, and forthcoming. They promise not to sue [1], and pledge that future patents are for the 'protection of the NuPIC community'. Maybe, but I'll spend my time with one of the many open-source projects without patent pledges of nebulous enforceability. Most of them seem to do just fine without patent guardians.

[1]: http://numenta.org/blog/2013/07/01/patent-position.html


"Why would we continue to file patents on work that is going to be open source? The principle reason is to protect the NuPIC community. For example, outside developers could work on similar concepts without becoming part of the open source community. They could seek patents on their own work, making it proprietary and blocking progress of open source NuPIC developers. By keeping our patent portfolio current, we retain the ability to protect the NuPIC community from these threats. In other words, by holding patents on the work, we are able to protect the whole community from others who might seek to wall off their work through patents. In addition to filing select patents going forward, we also will evaluate other measures that would enhance patent protection for the NuPIC community."

Since this is a long-term project, it's more important that Numenta is able to protect the community it is building from patent trolls, and this is one approach to doing that.


Most open source projects do not need a foundation or company patenting things related to the work and acting as a guardian. Why does NuPIC?

The blog post (which is not a legally binding contract in any way) also has this little gem:

"It should be noted that Numenta/Grok holds patents that do not pertain to the algorithms released in NuPIC. We do not view these patents as covered under the GPL, and we reserve the right to use these patents in the normal course of our business."

Assuming this were a legally binding document--which it's not--who would decide which of Numenta's patents are covered by the GPLv3 and which are not?

I'm happy you are trying to open-source such a cool piece of tech. But this is the patent policy of a company hedging its bets, not a company that's giving something to the world. It leaves Numenta legally in charge of the NuPIC community, instead of letting it evolve, because it's the only entity that can grant a GPLv3 patent license on future patents.

At least I can download and play with the GPLv3 version. The old license was so onerous that I didn't want to see the code, lest I open myself to patent liability 10 years down the line for using something kinda sorta like NuPIC.

But I wouldn't build a business on software with this kind of patent policy, and the commercial licenses Numenta sells make me think you'd rather I didn't.


> Most open source projects do not need a foundation or company patenting things related to the work and acting as a guardian. Why does NuPIC?

Because they believe it will be a multi-billion dollar industry in the next decade.


> who would decide which of Numenta's patents are covered by the GPLv3 and which are not?

The GPL text is rather precise about how to determine which patents are affected. Any patent that would be infringed by some manner of using, making, (...), or modifying that specific version of the program is covered by the license.

As far as patent grants go, it is hard to make something cover more than that. I guess a license could say "you may not own any patents, and that is the final word", but I do not know of any licenses that do that.


According to http://numenta.org/ - "the learning algorithms faithfully capture how layers of neurons in the neocortex learn". Could someone explain how this is different from deep learning and/or neural nets?


Andrew Ng himself said that he was inspired by the ideas that Jeff Hawkins put forth in his book "On Intelligence", which could explain some of the similarities with Deep Learning [1]. But honestly, even still, Deep Learning doesn't seem to model many of the understood principles of the neocortex at all. I read a relatively recent paper by Andrew Ng [2] about Deep Learning optimized for GPUs, and though it resembles some of the hierarchical aspects of the neocortex, it doesn't really go any further.

I recommend that, if you're interested, you read the CLA white paper [3] for more details, but the main difference I see is that the CLA tries to model the concept of storing information as sparse distributed representations by modeling neocortical columns. The problem there is that even today, neuroscientists don't agree on any one theory of their structure and function. And frankly the CLA's theory of neocortical columns seems to be the most sane. This is based on some of Gerard Rinkus's research [4] on the functions of neocortical columns.
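
To give a concrete feel for what a sparse distributed representation is, here's a toy sketch in plain NumPy (not NuPIC's API; the sizes are only roughly in the ballpark of the figures the white paper uses): a wide binary vector with about 2% of its bits active, where similarity between two representations is simply the count of overlapping active bits.

    import numpy as np

    # Toy sparse distributed representation (SDR): a wide binary vector
    # with only a small fraction of bits active. Illustrative only.
    N = 2048          # total number of bits
    SPARSITY = 0.02   # roughly 2% active bits

    def random_sdr(rng):
        """Return a random binary SDR with ~2% of bits active."""
        sdr = np.zeros(N, dtype=bool)
        active = rng.choice(N, size=int(N * SPARSITY), replace=False)
        sdr[active] = True
        return sdr

    def overlap(a, b):
        """Number of bits active in both SDRs - the basic similarity measure."""
        return int(np.sum(a & b))

    rng = np.random.default_rng(0)
    x, y = random_sdr(rng), random_sdr(rng)
    print(overlap(x, x))  # 40: an SDR overlaps fully with itself
    print(overlap(x, y))  # usually 0-2: unrelated random SDRs barely collide

Two random SDRs of this size almost never share many active bits, which is what makes overlap a robust similarity measure even when individual bits are noisy.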

Basically, in my opinion, there is A LOT more neuroscience in HTM-CLA than there is in Deep Learning. And I'm pretty sure that Deep Learning will converge on many of the concepts put forth by the CLA. It really shouldn't be seen as a competition in the first place, I suppose, but the theories in AI and theoretical neuroscience are converging pretty fast already.

[1]: http://www.wired.com/2013/05/neuro-artificial-intelligence/a...

[2]: http://web.stanford.edu/~acoates/papers/CoatesHuvalWangWuNgC...

[3]: http://numenta.org/resources/HTM_CorticalLearningAlgorithms....

[4]: http://people.brandeis.edu/~grinkus/Analog_Devices_Lyric_Tal...


There is a much bigger problem with basing your model of the brain on cortical columns, which is that they don't exist outside visual cortex and the whisker region of sensory cortex in rats and mice. The idea of a repeating functional unit was so appealing that many neuroscientists have just refused to give it up, in a kind of collective wishful thinking. There was an excellent review paper in 2005, "The cortical column: a structure without a function", which is basically the emperor-has-no-clothes of this field.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1569491/


Yup, read that a while back actually. And it makes some good points. It should be titled "The cortical column: a structure without an agreed upon function".

Cortical columns are a lot better defined in the visual cortex, where they're referred to as ocular dominance columns. The problem is that the structure of cortical columns is very malleable and plastic, so it is very difficult to see them consistently throughout the neocortex. So there isn't much definitive proof for cortical columns throughout the neocortex, but there is convincing theory, very much pushed by Hawkins.

There is a large consensus that the neocortex stores and acts on information in a distributed way. Most of the well-defined theories propose some kind of neural engram. But there wasn't any theory about how the neocortex stored information in a distributed way. The function of neocortical columns, as proposed by Rinkus, seems to explain very convincingly one such way of creating Sparse Distributed Representations.
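
For a rough sense of how competing columns could produce such a sparse code, here's a hypothetical k-winners-take-all sketch, loosely in the spirit of the spatial pooling idea; the sizes and the random connection model are made up for illustration, and this is not Numenta's implementation.

    import numpy as np

    # Hypothetical column competition producing a sparse code (illustrative).
    rng = np.random.default_rng(1)
    n_inputs, n_columns, n_winners = 512, 1024, 20

    # Each column connects to a random ~10% subset of the input bits.
    connections = rng.random((n_columns, n_inputs)) < 0.1

    def encode(input_bits):
        """Activate the k columns whose connections overlap the input most."""
        overlaps = (connections & input_bits).sum(axis=1)  # per-column overlap
        winners = np.argsort(overlaps)[-n_winners:]        # k-winners-take-all
        sdr = np.zeros(n_columns, dtype=bool)
        sdr[winners] = True
        return sdr

    x = rng.random(n_inputs) < 0.05   # a sparse binary input pattern
    print(np.flatnonzero(encode(x)))  # indices of the winning columns

Whatever the input, only n_winners columns come out active, so the output stays sparse, and similar inputs tend to activate overlapping sets of columns.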

In terms of theory, my opinion is that cortical columns seem to be integral to a unifying theory of the neocortex.


I haven't read the book, only a whitepaper, and followed news around Numenta. Another difference is that deep learning works in practice to achieve state-of-the-art results, beat benchmarks, and drive huge production systems, while CLA/HTM has yet to demonstrate a good result on a public dataset. If you're good, focus on convincing other scientists you're good, not on impressing beginners. It looks like it is developed as a one-man show outside the traditional ML/statistics world. Maybe the ideas are interesting, and maybe they are good, but I don't understand why they are receiving so much publicity now.


In the March 2014 version of Li Deng and Dong Yu's book on Deep Learning, they briefly relate Hierarchical Temporal Memory (HTM) to the Convolutional Neural Networks which are popular for Deep Learning.

http://research.microsoft.com/apps/pubs/default.aspx?id=2093...

It's worth noting that most people doing Deep Learning aren't trying to replicate the brain, but just want to do a better job at Machine Learning (ML) and Artificial Intelligence. Here's how I see it as someone working on Deep Learning; someone correct me if I'm wrong.

Deep Learning: Trying to do ML - yes. Trying to replicate brain - no (for the most part).

Numenta (HTM/CLA): Trying to do ML - yes (not sure how much they succeed). Trying to replicate brain - yes, but (i) we don't know exactly how the brain works, and (ii) they make approximations.

Projects like Nengo (http://nengo.ca/): Trying to do ML - no. Trying to replicate brain - yes.

I'm not very familiar with Nengo.

Edit: formatting


Well,

It seems like there ought to be a level between "simulating the brain" and just coming up with your own algorithm. I would imagine that level as "seeing what the brain can do at a particular low level, seeing how close you can come to duplicating that, seeing what unique approach you can derive there, applying it to other areas, repeat". That level would be "inspired by the brain without trying to simulate it". It seems like in his popular talks Hawkins implies he's doing that, but that in his actual software, as you mention, he winds up doing just a variation of standard machine learning.

It would be nice if he had postponed deciding he had a solution and instead kept banging on the problem of what algorithms can be kind of like X or Y thing that the brain appears to do. I'd like to think you could mine a bunch of ideas from this.


Ok, now how does this compare with Goodman and Tenenbaum's work on hierarchical Bayesian inference?


How is the approach different from the technology used at Vicarious (where Numenta cofounder Dileep George went)?


I don't think information about how Vicarious systems work is publicly available. Presumably they're using something similar to HTM.


"biologically accurate neural network"

ROFL :D Seriously?

Hit me up with some evidence to back up that extraordinary claim.


Any one of the pieces of literature they've produced? On Intelligence, or the whitepaper. They both go into detail about the structures they've found in the brain, and how their neural network corresponds to them.


Does it run on the Pilot?


Sorry, only with a Springboard on a newer Visor.



