To clarify: we found the range shrinks if one trains for a fixed number of epochs and expands if one trains for a fixed number of steps. So tuning can be easier or harder depending on the budget.
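For concreteness, the two budget conventions differ in how the total number of parameter updates is determined. A minimal sketch (the dataset size, batch size, and budget numbers below are illustrative, not from the original experiments):

```python
# Illustrative numbers only -- not from the experiments discussed above.
dataset_size = 50_000
batch_size = 256

steps_per_epoch = dataset_size // batch_size  # 195 updates per pass

# Fixed-epoch budget: the total number of updates scales with
# steps_per_epoch, so changing the batch size changes how many
# updates the optimizer actually gets.
epoch_budget = 90
steps_under_epoch_budget = epoch_budget * steps_per_epoch  # 17_550

# Fixed-step budget: the number of updates is held constant
# regardless of batch size or dataset size.
step_budget = 10_000
```

Since the two conventions give the optimizer different numbers of updates as other hyperparameters change, it is plausible that the range of workable learning rates behaves differently under each.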
An experienced machine learning researcher like Andrew Ng would probably not join the team as a Brain resident. We hire experienced machine learning researchers and engineers all the time (see https://careers.google.com/jobs#t=sq&q=j&li=20&l=false&jlo=e... ) and the residency program is probably not appropriate for people who are already experts. It is a program designed to help people become experts in machine learning.
For residents we look for some programming ability, mathematical ability, and machine learning knowledge. If an applicant knows absolutely nothing about machine learning, it would be strange (why apply?). We accept people who are not machine learning experts, but we want to be sure that applicants know enough about machine learning to make an informed choice about trying to become machine learning researchers. They need enough exposure to the field to have some idea of what they are getting into, and enough self-knowledge to be sure they are passionate about machine learning research.
We don't have a simple separation of concerns like that. Brain and DeepMind share a common vision around advancing the state of the art in machine learning in order to have a positive impact on the world. Because machine intelligence is such a huge area, it is useful to have multiple large teams doing research in it (unlike two product teams making the same product, two research teams in the same area just produce more good research). We follow each other's work and collaborate on a number of projects, although timezone differences sometimes make this hard. I am personally collaborating on a project with a colleague at DeepMind that is a lot of fun to work on.
Yes! This is exactly what I was hoping for in the comments. I found the article intriguing but could see that there was a lot more to say.
Having read the interview, what I really take away from this is a rationality success story. Chu kept a simple focus on what he was trying to achieve, and worked to maximise his chances.
There were a lot of pros at DeepMind. For example: Volodymyr Mnih, Andriy Mnih, Alex Graves, Koray Kavukcuoglu, Karol Gregor, Guillaume Desjardins, David Silver, and a bunch more I am forgetting.
It has content and communicates an approach to machine learning distinct from other approaches. It isn't like "big data", which is truly meaningless. However, deep learning is also not a single method or algorithm.
I would have described the library in question as a "GPU-Accelerated Neural Network library" since that is more descriptive.
NuPIC was released as free software, in part, because there has been a strong interest in the academic research community. I encourage you to check out the mailing list (http://lists.numenta.org/mailman/listinfo/nupic_lists.nument...) for some recent examples.
Generally, the most relevant academic community would be the NIPS community and I have not noticed any Numenta papers at NIPS, but please point me towards any I have missed if you are aware of any. I expect a lot of people have an opinion along these lines: http://developers.slashdot.org/comments.pl?sid=225476&cid=18...
Don't get me wrong, I would love to see Numenta produce something of value for the ML community, but it doesn't look good so far.
It runs about as fast as any of the other popular machine learning frameworks, occasionally faster.
Disclaimer: I work for Google and use JAX, although I'm not on the JAX team.